[PLUG] Capture of CSV data

Larry Brigman larry.brigman at gmail.com
Tue Jun 24 04:28:42 UTC 2014


Looking at the data in the sample, I would recommend taking the first
two fields and converting them into epoch seconds.
You can easily convert that back into any date format during reporting, or
at least combine the two fields into a single date/time field in the
database, which will do the conversion internally.
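
A minimal sketch of that conversion in Python (the field formats and file
name here are assumptions; adjust them to match the actual sample):

    import csv
    import time
    from datetime import datetime

    def row_to_epoch(row):
        """Combine the first two CSV fields (date, time) into epoch seconds."""
        # Assumed formats; change to whatever the machine actually emits.
        stamp = datetime.strptime(row[0] + " " + row[1], "%Y-%m-%d %H:%M:%S")
        return int(time.mktime(stamp.timetuple()))

    with open("sample.csv") as f:          # hypothetical sample file
        for row in csv.reader(f):
            print(row_to_epoch(row), row[2:])

Going the other way at report time is just datetime.fromtimestamp(), and
PostgreSQL's to_timestamp() does the same conversion inside the database.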

If you don't connect to the port, how much data does it store before you
start losing data?
Are all three ports different data for the same machine? Does each port's
data need to be synced up?



On Mon, Jun 23, 2014 at 8:51 PM, Chuck Hast <wchast at gmail.com> wrote:

> Forgot: one row = one bottle, so that means a machine is outputting
> about 180 rows/minute.
>
>
> On Mon, Jun 23, 2014 at 8:49 PM, Chuck Hast <wchast at gmail.com> wrote:
>
> > Well folks, this is what is neat about Linux, Open Source, and all of
> > the people associated with them. I have a bundle of info here and
> > plenty of entry points at all levels to start poking at it.
> >
> > I figured that a flat file was a no-no, but wanted to make sure. The row
> > rate is based on the number of bottle inspections the machine is doing;
> > today we are running heavy bottles, so the machines are only doing about
> > 180 bpm (bottles per minute). This is more or less a weaponized wine
> > bottle; if you are in a bar fight you WANT one of these in your hand.
> >
> > When we run lighter ware we may run as high as 280 bpm. This is just
> > one of three outputs: the one that is available on port 9010 is almost
> > three times as long and has more info in it, and there is a binary
> > output on port 9050, but for right now the data on port 9030 is enough
> > to get started.
> >
> > Tomorrow at work I will start poking at the data capture end and see
> > what I can do there, then I will start looking at setting up a db. (I
> > have muddled through some work on them; I run ZoneMinder, and every
> > once in a while I have to go in and clean the MySQL db on some of the
> > ZM machines, though of late I have not had to do it, so they must be
> > doing some work there.)
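> >
> > Something like this might be a starting point for the capture end (a
> > sketch only; it assumes port 9030 is a plain TCP stream of
> > newline-terminated CSV rows, and 10.1.1.20 is a made-up address for
> > the machine):
> >
> >     import socket
> >
> >     HOST = "10.1.1.20"   # hypothetical machine address
> >     PORT = 9030          # the CSV output port
> >
> >     # Connect to the machine and read CSV rows as they arrive.
> >     sock = socket.create_connection((HOST, PORT))
> >     for line in sock.makefile("r"):
> >         row = line.strip().split(",")
> >         print(row)       # later: hand each row to the database insert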
> >
> > I figure that once I can get those rows of data into a place where they
> > can be used, then I can move on to other machines. The format is more or
> > less the same for all of them, so once I can work with this one I can
> > move to the hot end (the moulding part of the plant) and capture that
> > data for processing, capture the data off of the cold end, and then they
> > can get even more precise data on which moulds are causing issues, etc.
> >
> > Again, I am all ears and will give it all a try.
> >
> >
> >
> > On Mon, Jun 23, 2014 at 7:34 PM, Benjamin Foote <plug at bnf.net> wrote:
> >
> >> Chuck, All,
> >>
> >> As you approach analysis of that data, I highly suggest sending it to
> >> an ELK stack (Elasticsearch, Logstash, Kibana).
> >>
> >> http://logstash.net
> >> http://logstash.net/docs/1.4.1/filters/csv
> >> http://www.elasticsearch.org/overview/kibana/
> >>
> >> More than just excellent graphing and log analysis, the interface is
> >> simple yet powerful. The ability to make custom dashboards that I can
> >> hand to a developer and offer them "insight" into what's going on has
> >> been instrumental in "effecting change" :)
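> >>
> >> A minimal logstash.conf sketch for this case (it assumes the machine's
> >> CSV port can be read as a raw TCP stream; the address and column names
> >> are placeholders):
> >>
> >>     input {
> >>       tcp {
> >>         host => "10.1.1.20"   # hypothetical machine address
> >>         port => 9030
> >>         mode => "client"      # connect out to the machine's CSV port
> >>       }
> >>     }
> >>     filter {
> >>       csv {
> >>         columns => ["date", "time", "mould", "reject_code"]  # placeholders
> >>       }
> >>     }
> >>     output {
> >>       elasticsearch { host => "localhost" }
> >>     }
> >>
> >> With that running, each CSV column shows up in Kibana as its own
> >> searchable field.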
> >>
> >> ben
> >>
> >>
> >> Benjamin Foote
> >> Linux System Administration and Development
> >> 503-313-5379
> >> ben at bnf.net
> >> http://bnf.net
> >> @bnf
> >>
> >>
> >> On Mon, Jun 23, 2014 at 3:34 PM, Rich Shepard
> >> <rshepard at appl-ecosys.com> wrote:
> >>
> >> > On Mon, 23 Jun 2014, Chuck Hast wrote:
> >> >
> >> > > To be a bit more clear, I am not sure how to get the data off of
> >> > > the communications medium and into a db; that is where I am at a
> >> > > loss. I assume that I have to create a table in the db that is laid
> >> > > out like each row, and then have a tool that can pipe the data from
> >> > > the source into the db.
> >> >
> >> > Chuck
> >> >
> >> >    Ah! You're not familiar with databases. Each table has a series
> >> > of columns that store the attributes; e.g., mould number, throughput
> >> > rate, number of rejects, etc. Each row in the table is a unique set
> >> > of those attributes identified by (perhaps) mould number and time.
> >> >
> >> > > The format of the connection is
> >> > > http://URL/ipaddy:PortNum
> >> > > PortNum may be 9010, 9030 or 9050; the last one spits out the data
> >> > > in binary format. I will deal with that later; right now I just
> >> > > want to get my data in the CSV format stuffed into a db.
> >> >
> >> >    Off-hand I cannot give you an answer because I've not done this
> >> > before. However, I know it can be done quickly and easily with python
> >> > and psycopg2.
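> >> >
> >> >    A rough sketch of what that could look like, tying together the
> >> > table layout described above and the machine's CSV port (the address,
> >> > database name, and column names are all placeholders):
> >> >
> >> >     import socket
> >> >     import psycopg2
> >> >
> >> >     conn = psycopg2.connect("dbname=plant")   # hypothetical db
> >> >     cur = conn.cursor()
> >> >     cur.execute("""CREATE TABLE IF NOT EXISTS inspections (
> >> >                      stamp   timestamp,
> >> >                      mould   integer,
> >> >                      reject  text)""")
> >> >
> >> >     # Read CSV rows off the port and insert them as they arrive.
> >> >     sock = socket.create_connection(("10.1.1.20", 9030))
> >> >     for line in sock.makefile("r"):
> >> >         date, tm, mould, reject = line.strip().split(",")[:4]
> >> >         cur.execute("INSERT INTO inspections VALUES (%s, %s, %s)",
> >> >                     (date + " " + tm, mould, reject))
> >> >         conn.commit()
> >> >
> >> >    psycopg2 passes the values as query parameters, so PostgreSQL does
> >> > the date/time conversion into the timestamp column itself.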
> >> >
> >> >    You might consider contracting with a python/postgres coder to
> >> > quickly write the application for you. There are a bunch available
> >> > locally.
> >> >
> >> > Rich
> >
>
>
> --
>
> Chuck Hast  -- KP4DJT --
> Glass, five thousand years of history and getting better.
> The only container material that the USDA gives blanket approval on.
> _______________________________________________
> PLUG mailing list
> PLUG at lists.pdxlinux.org
> http://lists.pdxlinux.org/mailman/listinfo/plug
>


