[PLUG] Flashair and wget

Larry Brigman larry.brigman at gmail.com
Tue Nov 18 18:29:49 UTC 2014


In the shell script, you would add -nc to the second wget command.
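
Something like this, keeping the rest of the pipeline as-is (a sketch against the same card URL; the first wget keeps -qO - because its output goes to the pipe rather than to a file, so no-clobber is only meaningful on the second wget, the one that writes files to disk):

```shell
# Listing fetch: -qO - writes the index page to stdout for the pipe.
wget -qO - http://192.168.0.1/DATALOG |
  grep 'DATALOG.*fname' |
  sed -e "s/^.*fname\"\:\"//" -e "s/\", \"fsize.*//" |
  while read -r line; do
    # Download fetch: -nc (--no-clobber) skips any file that already
    # exists locally instead of saving a .1, .2, ... copy.
    wget -nc "http://192.168.0.1/DATALOG/$line"
  done
```

Note that -nc and -O don't mix on the same invocation anyway: with -O the output filename is fixed by you, so wget has nothing to "not clobber" in the usual sense.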

BTW, the HTTP protocol doesn't support directory listings.  Where you see
one from a browser, it is the web server generating an index of the
directory and presenting it as HTML.  That is expected and proper.

On Tue, Nov 18, 2014 at 10:25 AM, Larry Brigman <larry.brigman at gmail.com>
wrote:

> You didn't misunderstand -nc.  You missed the -O option, which expects
> the next argument to be the output file for whatever is retrieved from
> the URL.
>
>
> On Tue, Nov 18, 2014 at 10:11 AM, John Jason Jordan <johnxj at comcast.net>
> wrote:
>
>> At the Clinic Wes composed a lovely command (that I converted to a
>> shell script) to download files from a Flashair SD+wifi card that lives
>> in a CPAP machine upstairs in my house. The connection is a bit wobbly
>> because the advertised range of the Flashair is 30 feet, and that is
>> about the distance from the machine to my laptop downstairs, plus there
>> is a floor in between. But it does work if I am patient. Here is the
>> command:
>>
>> wget -qO - http://192.168.0.1/DATALOG | \
>>   grep 'DATALOG.*fname' | \
>>   sed -e "s/^.*fname\"\:\"//" -e "s/\", \"fsize.*//" | \
>>   while read line; do wget "http://192.168.0.1/DATALOG/$line"; done
>>
>> The reason for the grep and filename search is that Toshiba, in its
>> alleged wisdom, decided to bury the files in the html rather than just
>> use a directory structure like ftp.
>>
>> But there is a problem: Every night the CPAP machine writes eight more
>> small files to the DATALOG folder, where the name of each file starts
>> with the date, e.g.:
>>
>> 20141116_235932_BRP.crc                 (from Sunday night)
>> 20141117_235932_BRP.crc                 (from last night)
>>
>> The normal behavior of wget is to re-download files already downloaded
>> and append .n to the additional copies, so running the script this
>> morning would result in the following files in the folder:
>>
>> 20141116_235932_BRP.crc                 (from Sunday night)
>> 20141116_235932_BRP.crc.1       (from Sunday night)
>> 20141117_235932_BRP.crc                 (from last night)
>>
>> After a while the folder where I store these files on my computer is
>> going to get horribly cluttered. I could add a line to the script to
>> delete all files ending in .1, but considering the time it takes for
>> the script to run (due to the poor connection) it would be far
>> preferable for wget not to download copies in the first place.
>> According to the man page adding -nc (no-clobber) is supposed to do
>> this. So I added -nc like this:
>>
>> wget -qO -nc - http://192.168.0.1/DATALOG ...
>>
>> When I ran the script it downloaded no new files, but created a file
>> 'nc' of zero bytes. WTH? Clearly I have failed to grasp how the -nc
>> option is supposed to work.
>>
>> Are there any wget experts here who can lead me to the light?
>> _______________________________________________
>> PLUG mailing list
>> PLUG at lists.pdxlinux.org
>> http://lists.pdxlinux.org/mailman/listinfo/plug
>>
>
>
