Growing stuff and doing things

Blackbeard

In Bloom
It could be described as a JSON deserializer, I think? I'm no expert.

In a valid JSON you can make a key reference the Python equivalent of a dict, and it will prepend that key to the child keys inside, like this:

ENV:{A1:{CO2:0}} turns into ->ENV_A1_CO2<-, a legal column name, and it binds that column to its value "0"

Then, say you get a different message, it will add that column no problem.

But the depth of the JSON doesn't matter: each level into the JSON becomes an underscore in the column name, so it's scalable up to the allowed column character limit. And I don't know shit about SQL either, so I'm surprised this works.
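If I'm following you, the flattening boils down to something like this (just a rough sketch of the idea, not the actual script; the flatten name and the sample message are made up for illustration):

Code:
#rough sketch of the flattening idea: nested keys get joined with
#underscores until you hit a plain value, which becomes the column's value
import json

def flatten(obj, prefix=''):
    flat = {}
    for key, value in obj.items():
        name = prefix + '_' + key if prefix else key
        if isinstance(value, dict):
            #go one level deeper, carrying the name built so far
            flat.update(flatten(value, name))
        else:
            flat[name] = value
    return flat

msg = json.loads('{"ENV": {"A1": {"CO2": 0}}}')
print(flatten(msg))   #prints {'ENV_A1_CO2': 0}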
The column character limit will depend on which SQL server you are running. If you are running MySQL, then 64 characters is the limit for column names (MySQL Doc.). If you are running MS SQL Server, then it apparently depends on which driver you are using:
"When the Microsoft Access or Microsoft Excel driver is used, column names are limited to 64 characters, and longer names generate an error. When the Paradox driver is used, the maximum column name is 25 characters. When the Text driver is used, the maximum column name is 64 characters, and longer names are truncated."
-MSSQL Doc.
Your naming convention seems pretty tight, so you shouldn't have any problems with the column character limit. But are you just inserting all your data into one row in one table? I'm assuming ENV is your ENVironment, being the base for the naming and the outermost object (like a container). Then it looks like you are naming your environments A1, A2, etc., and then you have your meat and potatoes for each env: CO2, TDS, RH, etc. Is this accurate? If it is, you might want to make a table for each environment, add all your readings as columns, and have your primary key be the DATE.
I assume if you are writing to a DB you will want to track data and be able to make reports; this layout would seem easier to deal with. Hope it helps.
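Something like this is what I'm picturing (just a rough sketch; the A1 table and the reading columns are made-up examples, not your real data):

Code:
#one table per environment, readings as columns, DATE as the primary key
import sqlite3
from datetime import datetime

conn = sqlite3.connect('sensor.db')
c = conn.cursor()

c.execute("""CREATE TABLE IF NOT EXISTS A1 (
                DATE TEXT PRIMARY KEY,
                CO2  REAL,
                TDS  REAL,
                RH   REAL)""")

#one row per reading time; the primary key blocks duplicate timestamps
c.execute("INSERT OR REPLACE INTO A1 (DATE, CO2, TDS, RH) VALUES (?, ?, ?, ?)",
          (datetime.now().isoformat(), 412.0, 980.0, 55.2))
conn.commit()
conn.close()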

Though I'm only on page 3 of 17 here, so for all I know you've got a completely different setup by now lol.
-BB
 

dstroy0

Zeroes and Ones
Thank you

Now it’s split into relational tables, and each table index is a Unix timestamp that is contained in every JSON from the microcontroller. The column names are shorter now that the tables are split, there are no null inserts in frequently used tables, and I have plans to split further, but it’s really not super necessary right now. That was my plan from the beginning, but I wasn’t sure how to execute it until I got more familiar with both SQL and Python.
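Roughly this shape, simplified (the env_a1/res_a1 tables and their columns here are made up for the example, not the real schema):

Code:
#each table keys on the Unix timestamp that comes in every json,
#so the tables can be joined back together on it for reports
import sqlite3

conn = sqlite3.connect('sensor.db')
c = conn.cursor()
c.execute("CREATE TABLE IF NOT EXISTS env_a1 (ts INTEGER PRIMARY KEY, CO2 REAL, RH REAL)")
c.execute("CREATE TABLE IF NOT EXISTS res_a1 (ts INTEGER PRIMARY KEY, TDS REAL, PH REAL)")

ts = 1589212800  #timestamp pulled from the incoming json
c.execute("INSERT OR IGNORE INTO env_a1 VALUES (?, ?, ?)", (ts, 415.0, 52.1))
c.execute("INSERT OR IGNORE INTO res_a1 VALUES (?, ?, ?)", (ts, 960.0, 5.9))
conn.commit()

#report query joining the split tables on the shared timestamp
for row in c.execute("SELECT e.ts, e.CO2, r.PH FROM env_a1 e JOIN res_a1 r ON e.ts = r.ts"):
    print(row)
conn.close()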

If you’ve got more ideas, that’s cool, I’d like to hear them. Some other people have helped me a lot here too; there’s lots of knowledge here.
 

Buck5050

Underground Chucker
Yeah I think it was the feed solenoid voltage
:unsure: That's very specific. Was the solenoid dosing inaccurately because of wrong control voltages, or was the setting not right?

If I remember correctly you switched up to the 2-part MegaCrop. If my pH swings out of range with that stuff I get a bit of yellowing and some slimmer blades on new growth. Just some food for thought. :passjhomie:
 

dstroy0

Zeroes and Ones

Like they weren’t opening properly, and a lot of the time they wouldn’t open at all.
 

Blackbeard

In Bloom
Thank you

Now it’s split into relational tables, and each table index is a Unix timestamp that is contained in every JSON from the microcontroller. The column names are shorter now that the tables are split, there are no null inserts in frequently used tables, and I have plans to split further, but it’s really not super necessary right now. That was my plan from the beginning, but I wasn’t sure how to execute it until I got more familiar with both SQL and Python.

If you’ve got more ideas, that’s cool, I’d like to hear them. Some other people have helped me a lot here too; there’s lots of knowledge here.
Here's that python script I'm working on

Code:
#! /usr/bin/python3

import serial
import json
import sqlite3
from datetime import datetime

#specify database
db_conn = sqlite3.connect('/home/pi/Desktop/python_scripts/sensor.db')

#constants
BAUD_RATE = 500000
RAW_INPUT_STRING_SLICE_START = 12

#figure out if the object is a valid json
def is_json(myjson):
    try:
        #leverage json.loads to tell us what's wrong with the json, if anything
        json_object = json.loads(myjson)
    except ValueError as e:    
        #useful error report
        print('ERROR is_json() :',e,"for object :")
        print(myjson,"\n")
        return False
    return True

#only pass valid json to this func
def flatten_json(nested_json):
    """
        Flatten json object with nested keys into a single level.
        Args:
            nested_json: A nested json object.
        Returns:
            The flattened json object if successful, None otherwise.
    """
    out = {}

    def flatten(x, name=''):
        if type(x) is dict:
            for a in x:
                flatten(x[a], name + a + '_')
        elif type(x) is list:
            i = 0
            for a in x:
                flatten(a, name + str(i) + '_')
                i += 1
        else:
            out[name[:-1]] = x

    flatten(nested_json)
    return out

#pass valid json to this func
def commit_json(json_data):
    #normalize and flatten
    json_data = json.loads(json_data)
    json_data = flatten_json(json_data)
   
    #var
    columns = []
    values = []
    c = db_conn.cursor()
    #pull existing columns from db
    db_columns = [i[1] for i in c.execute('PRAGMA table_info(myTable)')]
   
    #get key:value pairs
    for column, value in json_data.items():       
        #slice the first 12 char off each column name, which is the controller version in my schema
        if str(column[RAW_INPUT_STRING_SLICE_START:]) not in db_columns:
            try:
                add_column = "ALTER TABLE myTable ADD COLUMN {0}".format(" TEXT".join((str(column[RAW_INPUT_STRING_SLICE_START:]),)))
                #print db query for verif
                print(add_column)
                #add unknown column
                c.execute(add_column)
            #sqlite raises OperationalError (not ValueError) if the ALTER fails,
            #e.g. on the first run before the table exists
            except sqlite3.OperationalError as e:
                print("ERROR commit_json()->add_column : ", e)
        columns.append(str(column[RAW_INPUT_STRING_SLICE_START:]))
        values.append(str(value))
   
    #uncomment for num bindings
    #print(len(columns)-1)
   
    #uncomment for flat key:value pairs   
    #print(columns, values)
   
    create_query = "create table if not exists myTable({0})".format(" text, ".join(columns))
    insert_query = "INSERT INTO myTable({0}) VALUES(?{1})".format(", ".join(columns), ",?" * (len(columns)-1))
   
    #uncomment for create_query format verif
    #print(create_query)
    #uncomment for insert_query format verif
    print(insert_query)
   
    #print insert start time
    print("insert has started at " + str(datetime.now()))
   
    #make the table if it doesnt exist
    c.execute(create_query)
    #insert rows into database
    c.executemany(insert_query, (values,))
    #empty out values
    values.clear()
    #commit changes to db
    db_conn.commit()
    #close db cursor
    c.close()
   
    #print insert finish time
    print("insert has completed at " + str(datetime.now()) + "\n")
       
       
def main():
    #UART port
    ser = serial.Serial('/dev/ttyS0', BAUD_RATE, timeout=5) #open the serial port       
   
    #sentinel
    var = 1
    #main loop
    while var == 1:
        #read from stream into line
        line = ser.readline()
        #remove byte type and carriage return
        line = line.decode('ascii')
        line = line.rstrip()
        #wrap object
        line = '[' + line + ']'
       
        #uncomment to view raw object
        #print(line) #print 'line'
               
        #test if json in 'line' is valid json       
        if is_json(line):
            try:
                commit_json(line)
            #also catch sqlite errors so one bad insert doesn't kill the loop
            except (ValueError, sqlite3.Error) as e:
                print('ERROR main() :',e)
#run
main()
Data validation, try/catch blocks, error handling, recursion... You might be learning Python, but you've definitely been in the saddle before. Nice job (y)
 