Is there an easy way to find and view the changes?
What files are you trying to compare - files on your Garmin, POI files you download from here to your PC, etc.?
POI files downloaded from here. With the CSV files that I download, I put everything into columns in a GPX file so everything shows up in the proper places in POI Editor. Sometimes in my conversion there may be a couple of errors, such as something as simple as a comma not being in the right place, and so on. I simply fix them, which is no problem, but when an update comes out I have to do it all again. If I know exactly what changed in the update it's no problem, but I don't always know that.
Look at ExamDiff, http://www.prestosoft.com/edp_examdiff.asp. It lets you compare two text or CSV files and shows you the differences. It is freeware, but there is also a Pro (paid) version.
I'm not sure it will do GPX.
Thanks, that does what I want it to do. I don't need to compare GPX files anyway; I'm the one converting to that format.
You can also use the FC (file compare) function. It is built-in and free if you don't mind a DOS window.
and off it goes. It's helpful but not necessary that the two files are in the same folder. It's also helpful if you CD to the folder where (helpfully) they both reside.
Furthermore, if you see quite a few differences, you can redirect the output to a file to capture and examine it.
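If you'd rather script the comparison than read FC's console output, Python's standard difflib module produces a similar line-by-line diff on any platform. A minimal sketch (the filenames old.csv and new.csv and the POI rows are made up for illustration):

```python
import difflib

# Hypothetical contents of two POI file versions; in practice you would
# read these with open("old.csv").read().splitlines(), etc.
old_lines = ["name,lat,lon", "Diner,45.1,-93.2", "Gas,45.3,-93.4"]
new_lines = ["name,lat,lon", "Diner,45.1,-93.2", "Gas,45.35,-93.4"]

# unified_diff marks removed lines with '-' and added lines with '+'.
diff = list(difflib.unified_diff(old_lines, new_lines,
                                 fromfile="old.csv", tofile="new.csv",
                                 lineterm=""))
for line in diff:
    print(line)
```

Like FC's redirected output, the printed diff can be saved to a file for later study.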
This may seem pretty low-level to most, but thought I'd offer it as yet another alternative.
Finally, if the files are pretty small, you can view them simultaneously, side-by-side, and cross your eyes to view both. Inconsistencies will jump off the screen at you. We do this with contracts.
You need to use special characters for the greater than and less than characters so that those characters and the text between them will show.
Using those, the command and the parameters you intended would show like so:
FC <filename1> <filename2>
I was surprised to see that FC is alive and well on my Vista computer. I used the /? parameter and the output redirection character and got a text file with these instructions:
Compares two files or sets of files and displays the differences between them.

FC [/A] [/C] [/L] [/LBn] [/N] [/OFF[LINE]] [/T] [/U] [/W] [/nnnn]
   [drive1:][path1]filename1 [drive2:][path2]filename2
FC /B [drive1:][path1]filename1 [drive2:][path2]filename2
/A Displays only first and last lines for each set of differences.
/B Performs a binary comparison.
/C Disregards the case of letters.
/L Compares files as ASCII text.
/LBn Sets the maximum consecutive mismatches to the specified
number of lines.
/N Displays the line numbers on an ASCII comparison.
/OFF[LINE] Do not skip files with offline attribute set.
/T Does not expand tabs to spaces.
/U Compare files as UNICODE text files.
/W Compresses white space (tabs and spaces) for comparison.
/nnnn Specifies the number of consecutive lines that must match
after a mismatch.
[drive1:][path1]filename1
             Specifies the first file or set of files to compare.
[drive2:][path2]filename2
             Specifies the second file or set of files to compare.
Thanks for this information. Nice to know there are multiple ways of achieving things.
FC works quite nicely for me. You just have to make sure the files are sorted in the same order; I also added the /L and /C options.
All of the previous suggestions are good. There is another method of searching for duplicates if you use Excel. By combining the old file and the new file in one Excel spreadsheet, duplicates can be located by any given column, or multiple columns, using the information found at the following site.
For my work, I look for duplicates in the longitude, latitude, and street address columns. It works darn well, but takes a little getting used to. It works even better if none of the information is changed in either file during an edit session.
If you use this method, it is a good idea to insert a blank column as column A in both files (it will be deleted later) and place a number or letter in each file (e.g. 1/2 or A/B): the old file gets the number 1 and the new file gets the number 2 (or whatever works). Sort on whatever column you want to search for duplicates as the primary sort, with column A as the secondary.
To number the column in each file, simply type the number or letter in the first row, click on it to highlight it, hold the CTRL and SHIFT keys down and proceed to the bottom of the file to extend the selection, then press CTRL-D to fill down. Done.
I believe this information was previously posted on this site. To eliminate any duplicates that are not needed, simply change the number or letter in their first column (to 3 or C), sort on that column, highlight all of the rows with that number or letter, right-click and delete. Done.
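The same old-file/new-file tagging idea can also be scripted. A rough Python sketch of the Excel approach described above, using the standard csv module; the column names and file contents here are invented placeholders, and in practice you would read the two files from disk:

```python
import csv
import io

# Made-up stand-ins for the old and new POI files.
old_csv = "lon,lat,name\n-93.2,45.1,Diner\n-93.4,45.3,Gas\n"
new_csv = "lon,lat,name\n-93.2,45.1,Diner\n-93.5,45.6,Cafe\n"

def rows(text, tag):
    # Tag each row with its source file, like the 1/2 helper column in Excel.
    return [dict(r, _src=tag) for r in csv.DictReader(io.StringIO(text))]

combined = rows(old_csv, "old") + rows(new_csv, "new")

# Group rows by the duplicate key (longitude + latitude), which plays the
# role of the primary sort column in the Excel method.
seen = {}
for r in combined:
    seen.setdefault((r["lon"], r["lat"]), []).append(r["_src"])

duplicates = [key for key, srcs in seen.items() if len(srcs) > 1]
only_new = [key for key, srcs in seen.items() if srcs == ["new"]]

print("in both files:", duplicates)   # unchanged entries
print("new entries:", only_new)       # added in the update
```

Rows flagged in `duplicates` are the unchanged POIs; whatever is left in only one file is what the update actually changed.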
boy do i feel like a n00b. sorry for the goof-up. thank you for noting and correcting the instructions.