Problem stations
special cleaning solution
The cleaning solution uses only a small subset of sites, including the Denali sites and sites that are often messy. Run it like the line below, or see the make-clean file.
Alaska_cleaning_solution 07aug12
more commands
del_pt
If only a few points have bad pseudorange you can use
del_pt
to delete those points. Because there are so few of them, I (Jeff) think it is better to remove both phase and pseudorange data for these points, as opposed to removing all pseudorange data for GPS37 using del_pcode_arc. Also, in my experience, when there are only a few points like this the phase data are often bad as well. So I would go back to the qm directory and use del_pt by hand to remove the points.
An example:
1448/flt> allbadp 07oct08gmas____u0.postlog
-119496000.00 GMAS GPS37 8-OCT-2007 15:44:46.0000 110
-119495000.00 GMAS GPS37 8-OCT-2007 15:49:46.0000 110
-119494000.00 GMAS GPS37 8-OCT-2007 16:14:46.0000 110
-119494000.00 GMAS GPS37 8-OCT-2007 16:09:46.0000 110
-119494000.00 GMAS GPS37 8-OCT-2007 16:04:46.0000 110
-119494000.00 GMAS GPS37 8-OCT-2007 15:59:46.0000 110
-119494000.00 GMAS GPS37 8-OCT-2007 15:54:46.0000 110
447900.00 GMAS GPS30 8-OCT-2007 15:44:46.0000 110
435303.00 GMAS GPS54 8-OCT-2007 16:09:46.0000 110
434522.00 GMAS GPS24 8-OCT-2007 16:14:46.0000 110
433306.00 GMAS GPS24 8-OCT-2007 16:09:46.0000 110
433262.00 GMAS GPS24 8-OCT-2007 16:04:46.0000 110
432806.00 GMAS GPS36 8-OCT-2007 16:04:46.0000 110
432401.00 GMAS GPS36 8-OCT-2007 16:14:46.0000 110
432376.00 GMAS GPS36 8-OCT-2007 15:59:46.0000 110
> cd ../qm
> gunzip *08gmas*
> del_pt *08gmas* GMAS GPS37 "8-OCT-2007 15:45"
# and repeat del_pt for the other 6 points
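If you prefer not to type the del_pt commands one at a time, the seven deletions can also be done in a loop. This is just a sketch: it assumes del_pt accepts the same rounded-to-the-minute times used in the example above, so check the postlog afterwards to make sure all seven GPS37 points are really gone.

foreach t ( 15:45 15:50 15:55 16:00 16:05 16:10 16:15 )
  del_pt *08gmas* GMAS GPS37 "8-OCT-2007 $t"
end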
pppclean
If a station solution seems to have quite a few cycle slips, then I (Jeff) would run
pppclean
on the command line to let it do its work automatically. You might have to run pppclean more than once.
An example:
pppclean 07oct08gmas____u0
It is perfectly OK to do this, because this is what autoclean would have done if it had not been messed up by a few bad points. But if there are small cycle slips or a few outliers left, pppclean will not find them (it uses a 10 cm tolerance for jumps in the residuals).
del_qm
To delete bad data you can use the
del_qm
command.
You might need to start with the original .qm file, because autoclean will have messed it up.
cd $ANALYSIS/wwww/qm
gzip *ddxxxx*
cp -p original/*ddxxxx* .
Then
gunzip *ddxxxx*
mv yymmmddxxxx____??.qm tmp.qm
del_qm -i tmp.qm -o yymmmddxxxx____??.qm -g XXXX -t1_char "DD-MMM-YYYY HH:MM" -t2_char "DD-MMM-YYYY HH:MM"
Check out del_qm -h for more information/options on the command. The station and satellite specifications for del_qm are "OR" operations, not "AND" operations. This is one reason why you have to be really careful about using it.
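As a concrete illustration, here is the recipe with the placeholders filled in for a hypothetical case: deleting everything from station GMAS between 15:40 and 16:20 on 8-OCT-2007. The week, the qm file name (including the two characters before .qm), and the times are made up for the example, so substitute the real values for your own case, and remember the OR behavior noted above.

cd $ANALYSIS/1448/qm
gunzip *08gmas*
mv 07oct08gmas____b0.qm tmp.qm
del_qm -i tmp.qm -o 07oct08gmas____b0.qm -g GMAS -t1_char "8-OCT-2007 15:40" -t2_char "8-OCT-2007 16:20"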
Set the CAMP variable (if not set):
setenv CAMP $ANALYSIS/wwww
And run
pppsolve *ddxxxx*
Or
cd $ANALYSIS/wwww/flt
pppclean yymmmddxxxx____??
read_timeseries
When I use read_timeseries, I give it arguments for the sigma tolerance, which makes it throw out solutions with larger sigmas.
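If you want to apply the same kind of cut by hand, a rough stand-in (this is not read_timeseries itself) is to filter the .pfiles lines directly. The sketch below assumes the sigmas are columns 7-9, as in the 299C listing further down, and uses an arbitrary tolerance of 10; the file name is a placeholder for whichever .pfiles file you are looking at.

awk '$7 < 10 && $8 < 10 && $9 < 10' your.pfiles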
assignments of problems
Below is a short overview of the problems the following stations have or had.
299C huge outlier found in the time series plots
AC16 new site shows up suddenly in our database
LOGC is historically noisy (bad sky view)
PETP has episodic problems with severe RF interference
ZECK has a history of being a bad station from time to time
299C
There was a huge outlier found in the time series plots. I found the real problem. Look at these lines from the .pfiles file:
2005.5753 299C . 217.924221759 64.028943706 745.1622 2.8 3.0 5.7 -0.054 -0.081 0.264 /gps/analysis/1333/post/05jul29alaska2.0_nfxigs03.poscov
2005.5781 299C . 217.924221808 64.028943690 745.1676 2.8 3.0 5.7 -0.056 -0.093 0.275 /gps/analysis/1333/post/05jul30alaska2.0_nfxigs03.poscov
2005.5808 299C . 217.936350543 64.031716461 726.6352 2.7 2.9 5.5 -0.058 -0.086 0.296 /gps/analysis/1334/post/05jul31alaska2.0_nfxigs03.poscov
2005.5836 299C . 217.924221708 64.028943689 745.1762 3.0 3.1 5.9 -0.047 -0.080 0.259 /gps/analysis/1334/post/05aug01alaska2.0_nfxigs03.poscov
2005.5863 299C . 217.924221781 64.028943654 745.1625 2.7 2.9 5.5 -0.049 -0.101 0.278 /gps/analysis/1334/post/05aug02alaska2.0_nfxigs03.poscov
Clearly one does not match: 05jul31. And looking in the directory for that week it is easy to see why:
EVEREST 1334/flt> ls -l *alaska*point*
-rw-rw-r-- 1 akda gipsy 573 2007-09-12 04:55 05aug01alaska2.0_nf.point.gz
-rw-rw-r-- 1 akda gipsy 688 2007-09-12 18:51 05aug02alaska2.0_nf.point.gz
-rw-rw-r-- 1 akda gipsy 604 2007-09-13 06:36 05aug03alaska2.0_nf.point.gz
-rw-rw-r-- 1 akda gipsy 586 2006-09-12 12:06 05aug05alaska2.0_nf.point.gz
-rw-rw-r-- 1 akda gipsy 268 2006-09-12 22:06 05aug06alaska2.0_nf.point.gz
-rw-rw-r-- 1 akda gipsy 1963702 2007-09-13 17:50 05jul31alaska2.0_nf.point.gz
Several days of this week were recently rerun for some reason (possibly to add data, or because some minor cleaning was done). But one of them contains some awful data. Because the solutions in this week were either run back in 2006 or just recently, my guess is that this is our old friend WES2 messing up another day. There is a point positioning solution in the directory that confirms that. So when WES2 is fixed and the solution is rerun, all should be well again.
What causes problems like this? I went back to the original qm file and reran the point positioning solution, and the bad data are quite clear. Do this:
cd /gps/data/1334/flt
postplot WES2 ALL *31wes2*fit -d 110
FYI, all the time series plots for Alaska were clean of serious outliers when I left. So if you see others like this on any station it probably means that a solution rerun since then has a serious problem. Also, if you are rerunning any solution that was last run before February 2007, it is worth your time to check whether this is one of those days when WES2 had bad data, or whether any other station where the data were added recently might have a problem. This should often tell you:
ls -l *____*point*
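Since a healthy .point.gz file in the listing above is a few hundred bytes while the bad day's file is nearly 2 MB, one quick way to flag suspects is to filter on the size column. This is just a convenience on top of the ls line, and the 10000-byte threshold is arbitrary:

ls -l *____*point* | awk '$5 > 10000'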
AC16
AC16 is a new PBO site that shows up in our database starting in week 1440. The following is from Prof. Jeff's email about how to deal with this kind of site:
Every file for site AC16, a new PBO site, had large residuals and postbreak suggested that many more ambiguities needed to be added every day. When this happens and the site is new, the first thing you should suspect is that the coordinates were bad. This was the cause in this case. I had gotten the coordinates from UNAVCO, but in their "pre-release" log they obviously had bad coordinates.
This example helps explain why putting in bad starting coordinates can cause so much trouble. Normally it is not a problem, but for some reason UNAVCO had a log file on their website with bad values. I (Prof. Jeff) entered information for the following PBO sites from the same "pre-release" part of their website, so please keep an eye on these sites in case other ones also have bad coordinates. I will see if I can check the site information before the data actually show up.
AB09 RAZORBACK PBO (PBO 00000000-XXXXX-00000)
AB35 YAKATAGA PBO (PBO 00000000-XXXXX-00000)
AB45 SAG RIVER PBO (PBO 00000000-XXXXX-00000)
AB46 ARCTIC VILL PBO (PBO 00000000-XXXXX-00000)
AC07 BUCKLAND PBO (PBO 00000000-XXXXX-00000)
AC08 CAPE DOUGLASPBO (PBO 00000000-XXXXX-00000)
AC09 KAYAK ISLAND PBO (PBO 00000000-XXXXX-00000)
AC16 DEEP WATER BAY (PBO 00000000-XXXXX-00000)
AC30 MONTAGUE ISL PBO (PBO 00000000-XXXXX-00000)
AC33 TOKO DENALI PBO (PBO 00000000-XXXXX-00000)
AC37 LAKE CLARK PBO (PBO 00000000-XXXXX-00000)
AC42 SANAK PBO (PBO 00000000-XXXXX-00000)
AC43 SEAL ROCKS PBO (PBO 00000000-XXXXX-00000)
AC47 SLOPE MTN PBO (PBO 00000000-XXXXX-00000)
AC48 NAKED ISLAND PBO (PBO 00000000-XXXXX-00000)
AC51 STRANDLINE PBO (PBO 00000000-XXXXX-00000)
AC52 PILOT POINT PBO (PBO 00000000-XXXXX-00000)
Yes, the coordinates were bad. Here is the end of the fltlog file for one of the days; the ESTIMATE values for the positions "STA X", "STA Y", and "STA Z" are about 10 km, which is enough to cause big problems.
PB GPS27 AC16 PB GPS59 AC16 PB GPS38 AC16 STA X AC16
ESTIMATE 2.94645512E-03 6.41456198E-03 6.48651987E-03 1.03611079E+01
STA Y AC16 STA Z AC16 TRPAZCOSAC16 TRPAZSINAC16
ESTIMATE -1.34061542E+01 1.02130169 7.24510153E-05 -1.29543743E-04
0.815u 0.355s 0:01.55 74.8% 0+0k 0+0io 9pf+0w
------ exit filter ------
When you see a new station suddenly appear and every day is bad, this is the first thing to check. I entered the coordinates I got from the UNAVCO log file, but obviously their log file had bad coordinates. If this is a trend, they might have bad coordinates for more new sites, so keep watching.
The solution to this problem is fairly simple but involves some extra work because there are about 2 months of data. The site started in week 1440, and this will affect every week since then.
The first step is to find the problem, which involves looking at the *.fltlog file. Now that I know what the problem is, I will first fix the entry in the /goa/stalocs/stalocs file so that it has a good coordinate, and then I will have to go back to the original qm files and rerun autoclean, because no doubt autoclean inserted many false ambiguities before you even saw it.
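If you just want to pull the station position adjustments out of the fltlog files quickly, a grep like the one below should work, since the filter prints the parameter names on one line and the ESTIMATE values on the next (as in the excerpt above). The *ac16*fltlog pattern is a guess at the file names, and -A1 needs a grep that supports it. In each affected week's flt/ directory:

egrep -A1 "STA (X|Y|Z)" *ac16*fltlog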
This is what we had (from UNAVCO) for the site (AC16):
DEEP WATER BAY (PBO 00000000-XXXXX-00000) main -2681865.7234 -1649894.8013 5528120.4020
But the RINEX file (stored in $RAWDATA/2007/250/ac162500.07d.gz) says:
-2671508.2533 -1663306.7876 5529148.2252 APPROX POSITION XYZ
That matches the shift in the solution, so the RINEX file coordinates seem to be OK.
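For the record, here is the arithmetic behind "matches the shift" (my calculation, assuming the filter ESTIMATE values above are in km):

RINEX X - stalocs X:  -2671508.2533 - (-2681865.7234) =  10357.47 m =  10.36 km
RINEX Y - stalocs Y:  -1663306.7876 - (-1649894.8013) = -13411.99 m = -13.41 km
RINEX Z - stalocs Z:   5529148.2252 -   5528120.4020  =   1027.82 m =   1.03 km

These agree with the STA X, STA Y, and STA Z ESTIMATE values in the fltlog excerpt (10.36, -13.41, and 1.02) to within a few meters.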
(1) Fix the stalocs file:
cd /goa/stalocs
sccs edit stalocs
vi stalocs
(update the file)
sccs delget stalocs
(enter a comment when prompted)
(2) Go back to the original qm files:
cd /gps/data
gzip 144?/qm/*ac16*qm
(this makes sure that all files are compressed so we don't get duplicate files)
foreach week ( 144? )
  echo "Replacing files for week $week"
  /bin/cp -pf $week/qm/original/*ac16* $week/qm
  rm $week/flt/*ac16*
end

The /bin/cp -pf preserves file times and should not prompt us to overwrite; the rm gets rid of the old files in flt/.
(3) Tell autoclean to do the cleaning over again.
This is trickier, so be careful if you do it yourself. Accidentally deleting the wrong thing can cause problems. The autoclean control files are in /gipsy/control/edit-request/, and there is one file per week.
cd /gipsy/control/edit-request
vi 144?.edit
In each file you have to search for all the filenames for AC16, and then delete everything after the filename on each line. This is tedious, and is a good sign that the system was designed assuming you would not need to do this. So you change the lines that start out like this:
1440/qm/07aug18ftp4____b0.qm # edited by pppclean at Mon Oct 22 05:27:31 AKDT 2007
1440/qm/07aug12ac16____b0.qm # edited by pppclean at Tue Oct 23 05:40:24 AKDT 2007
1440/qm/07aug13ac16____b0.qm # edited by pppclean at Tue Oct 23 05:41:55 AKDT 2007
1440/qm/07aug14ac16____b0.qm # edited by pppclean at Tue Oct 23 05:43:20 AKDT 2007
1440/qm/07aug15ac16____b0.qm # edited by pppclean at Tue Oct 23 05:44:55 AKDT 2007
1440/qm/07aug16ac16____b0.qm # edited by pppclean at Tue Oct 23 05:46:34 AKDT 2007
1440/qm/07aug17ac16____b0.qm # edited by pppclean at Tue Oct 23 05:48:06 AKDT 2007
1440/qm/07aug18ac16____b0.qm # edited by pppclean at Tue Oct 23 05:49:28 AKDT 2007
1440/qm/07aug12alsc____b0.qm # edited by pppclean at Wed Oct 24 06:14:42 AKDT 2007
to be like this:
1440/qm/07aug18ftp4____b0.qm # edited by pppclean at Mon Oct 22 05:27:31 AKDT 2007
1440/qm/07aug12ac16____b0.qm
1440/qm/07aug13ac16____b0.qm
1440/qm/07aug14ac16____b0.qm
1440/qm/07aug15ac16____b0.qm
1440/qm/07aug16ac16____b0.qm
1440/qm/07aug17ac16____b0.qm
1440/qm/07aug18ac16____b0.qm
1440/qm/07aug12alsc____b0.qm # edited by pppclean at Wed Oct 24 06:14:42 AKDT 2007
Repeat for all the files. The only good thing about this is that for most of the weeks all the AC16 data was processed at once, so except for the last week or two the AC16 files will be on successive lines. In the process of doing this, I found that week 1450 was affected too, so I went back and copied over the original qm files for week 1450.
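Given the warning above about accidentally deleting the wrong thing, do the following only with a backup. If you are comfortable with sed, a one-liner like this (assuming your sed supports -i; the .bak suffix keeps copies of the originals) strips the pppclean annotation from just the ac16 lines in each edit file:

cd /gipsy/control/edit-request
sed -i.bak '/ac16/s/ *# edited by pppclean.*//' 144?.edit 1450.edit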
(4) Now rerun autoclean.
Just type "autoclean". It checks every past week for new files to clean, in within a minute or so it is working on the AC16 files. It finds a few slips in some of the files, rather than many slips in every file. So this is normal.
====================== Editing week 1439 ======================
====================== Editing week 1440 ======================
Cleaning qm file: 1440/qm/07aug12ac16____b0.qm
Slip tolerance was 50
new 1 12-AUG-2007 01:24:46.00 PHASE#10$1 AC16 GPS58 el= 19.6 cm= -72.3
Slip tolerance was 10
... no cycle slips found
Cleaning qm file: 1440/qm/07aug13ac16____b0.qm
... no cycle slips found
Cleaning qm file: 1440/qm/07aug14ac16____b0.qm
Slip tolerance was 10
... no cycle slips found
(and so on)
IRKJ - good example of odd receiver behavior - oscillating pseudorange residuals
The pseudorange residuals for all satellites follow a strange pattern, oscillating between about -200 and +600, and the residuals look similar for all satellites. What is happening here, I think, is some instability of the receiver clock or something similar. Something like this happened with the site SHAO a few years ago, except with a larger magnitude. The phase data are very clean.
There are really only two things you can do in this case. One is to leave the data alone. The other: don't try to delete the data point by point, and don't try to delete just the pseudorange data for everything; instead, add the day to a list we keep of days where ALL pseudorange data should be ignored. Then you don't delete anything -- the data file stays the same but we only use the phase.
vi /gipsy/info/Problem.bad_pcode
(go to the end of the file and add a line that reads)
07aug16 IRKJ
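Equivalently, if you do not want to open an editor, appending the line from the shell does the same thing (then double-check the end of the file):

echo "07aug16 IRKJ" >> /gipsy/info/Problem.bad_pcode
tail -3 /gipsy/info/Problem.bad_pcode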
That's it. In all future runs, the pseudorange will be ignored. There are many lines in this file -- unfortunately most of them are from data where some earlier version of automatic cleaning flagged the pseudorange as bad, even though in reality it was just a case of some points needing to be deleted.
LOGC
LOGC is historically noisy, which is easy to understand if you have ever seen it (bad sky view). Also, once it starts to snow, the noise level at LOGC rises. So expect to see more bad points then -- we might see some of its effect at the very end of the data we currently have. I have not seen any solution to the problem of LOGC other than deleting all the outliers. It's a pain.
PETP
The site PETP has episodic problems with severe RF interference. A good example is 07oct23 in week 1450 (files also copied over to the /gps/data/bad_examples/flt/ directory).
postplot PETP ALL *23petp*fit
If you look at this plot you will see a period of significantly elevated residuals from about hour 26 to hour 34. Then the residuals get small again and go back to normal. I have tried many times before to fix up PETP files that look like this, and the best solution is to remove all the data from this window. If you add -xel to the postplot line, you can see that there is almost no elevation dependence of the residuals, and if you add -xaz instead you can see that the largest residuals are concentrated in a certain range of azimuths.
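Cutting out that window is just the del_qm recipe from the del_qm section applied to the PETP qm file. The sketch below is only a template: the file name is a guess at the naming pattern and the times are left as placeholders, so read the real window limits off the postplot and check the qm directory for the actual file name before running it.

cd $ANALYSIS/1450/qm
gunzip *23petp*
mv 07oct23petp____b0.qm tmp.qm
del_qm -i tmp.qm -o 07oct23petp____b0.qm -g PETP -t1_char "DD-MMM-YYYY HH:MM" -t2_char "DD-MMM-YYYY HH:MM"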
There is a later period of large residuals, but you can see that the residuals for GPS35 form a stair-step pattern. I count 4 small cycle slips on that satellite over about a 3-hour span. I would figure out the times and run add_amb manually to add ambiguities at those points (short_hand will not handle this well because the times of the slips do not line up with the times of the largest residuals, so do it by hand). After doing that and removing the first bad window, I would run the point positioning again and see what is left.
It is possible that PETP will be like this most days for a month or so, then will suddenly stop and go back to normal. You will not see these problems at the same time of day -- it depends on when they turn on whatever powerful transmitter is causing the interference.
ZECK
week 1433, day 29
Final assessment of ZECK data from this day: it is junk.
I'll elaborate on my assessment so that you can learn how I approached it. Part of what went into it is that I know ZECK has a history of being a bad station from time to time, although not every day by any means. Maybe it has some intermittent RF interference? I went back to the original file, and it had some enormous pseudorange residuals for GPS37. These would have caused autoclean to mess up the file, and I am pretty sure that these bad pseudorange points would still have been obvious in the point positioning solution left after autoclean. If so, the first thing you should have done was go back to the original file, rerun pppsolve, and then delete the bad pseudorange for GPS37 using del_pcode_arc.
Then I reran it and got something that looked a lot like what you had in your file after more cleaning, just not quite as bad. I'll leave those files there for now so you can look at them and then delete the files (also move the qm file to a bad subdirectory). What I saw was that some satellites looked OK, but several of them had excursions of up to 20-30 cm magnitude. If you look at some of the worst ones (see list below), they do not look normal at all. The later part of ZECK-GPS41, for example, looks like a staircase going down to the right. That might be a sign of several small cycle slips in the data. Looking at other satellites, I can see 5-10 cm jumps all over the place.
maxi 07jun29zeck____a0.postlog | head
-25.06 ZECK GPS29 29-JUN-2007 17:04:46.0000 120
-24.61 ZECK GPS29 29-JUN-2007 17:09:46.0000 120
20.00 ZECK GPS41 29-JUN-2007 17:49:46.0000 120
19.29 ZECK GPS56 29-JUN-2007 10:49:46.0000 120
-18.87 ZECK GPS40 29-JUN-2007 10:39:46.0000 120
17.94 ZECK GPS46 29-JUN-2007 08:14:46.0000 120
17.90 ZECK GPS27 29-JUN-2007 08:39:46.0000 120
-17.86 ZECK GPS34 29-JUN-2007 07:54:46.0000 120
-17.26 ZECK GPS40 29-JUN-2007 10:44:46.0000 120
-17.26 ZECK GPS32 29-JUN-2007 08:39:46.0000 120
What this means is that maybe, after a LOT of work, you might be able to rescue this file. And unfortunately the problem is not confined to just one small stretch of time, but is spread out over most of the day. When I see this with a station that has a history of periods of bad data, it is usually a sign that it is time to throw it out. Or perhaps try once or twice to see if it gets better, and throw it out if it does not. I'm guessing that you went through several iterations and didn't get anything that looked right, so certainly you are justified in moving this file to the bad/ subdirectory. Be sure to create the qm/bad subdirectory if it does not already exist before you do a mv *29zeck* bad, otherwise you will just rename the file.
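A safe way to do that move (the week-1433 path is assumed here; adjust it to wherever the ZECK qm file actually lives):

cd $ANALYSIS/1433/qm
mkdir -p bad      # -p: no complaint if bad/ already exists
mv *29zeck* bad/  # the trailing slash makes mv fail loudly if bad/ is not a directory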