SQLAlchemy is a Python library created by Mike Bayer to provide a high-level, Pythonic (idiomatically Python) interface to relational databases such as Oracle, DB2, MySQL, PostgreSQL, and SQLite. SQLAlchemy attempts to be unobtrusive to your Python code, allowing you to map plain old Python objects (POPOs) to database tables without substantially changing your existing Python code. SQLAlchemy includes a database-server-independent SQL expression language and an object-relational mapper (ORM) that automatically persists your application objects.
Doc link : http://docs.sqlalchemy.org/en/latest/core/connections.html#basic-usage
NOTE : It is similar to what the Hibernate framework does in Java.
SQLAlchemy provides the following functionality:
- It maps relational databases into objects
- It manages an application's database connections
- It can create/alter a database layout if it is allowed to
The most powerful feature of SQLAlchemy is the first point: given a table description, it maps the data in tables into classes of objects, where each instance of an object is a row in the table, and can be worked with like a class or structure in code.
Example: Given a table called "Users" with FirstName and LastName columns. Once the columns are described in the Python code, adding a row to the Users table might look like this:
joebruin = User()
joebruin.FirstName = "Joe"
joebruin.LastName = "Bruin"
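Fleshed out, the mapping above might look like the following sketch, assuming SQLAlchemy 1.4+ is installed; the id column and the in-memory SQLite engine are additions for illustration, not part of the original example:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):
    __tablename__ = "Users"
    id = Column(Integer, primary_key=True)   # ORM mappings need a primary key
    FirstName = Column(String(50))
    LastName = Column(String(50))

engine = create_engine("sqlite://")          # throwaway in-memory database
Base.metadata.create_all(engine)             # SQLAlchemy emits CREATE TABLE for us

with Session(engine) as session:
    joebruin = User()
    joebruin.FirstName = "Joe"
    joebruin.LastName = "Bruin"
    session.add(joebruin)
    session.commit()
    # Each row comes back as a User instance, per the point above.
    row = session.query(User).filter_by(LastName="Bruin").one()
    first, last = row.FirstName, row.LastName
```

Note that you never write the CREATE TABLE or INSERT statements yourself; the metadata and the session do that for you.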
In MySQL's SHOW COLUMNS / DESCRIBE output, there are three possible (non-empty) values for the "Key" attribute:
The meaning of PRI and UNI are quite clear:
- PRI=> primary key
- UNI=> unique key
The third possibility, MUL, is basically an index that is neither a primary key nor a unique key. The name comes from "multiple", because multiple occurrences of the same value are allowed. Straight from the MySQL documentation:
"If Key is MUL, the column is the first column of a nonunique index in which multiple occurrences of a given value are permitted within the column."
There is also a final caveat:
"If more than one of the Key values applies to a given column of a table, Key displays the one with the highest priority, in the order PRI, UNI, MUL."
As a general note, the MySQL documentation is quite good. When in doubt, check it out!
SQL injection is a technique often used to attack data driven applications. This is done by including portions of SQL statements in an entry field in an attempt to get the website to pass a newly formed rogue SQL command to the database (e.g., dump the database contents to the attacker). SQL injection is a code injection technique that exploits a security vulnerability in an application’s software. The vulnerability happens when user input is either incorrectly filtered for string literal escape characters embedded in SQL statements or user input is not strongly typed and unexpectedly executed.
For example, supplying the value ' OR '1'='1 for a "userName" field renders one of the following SQL statements in the parent language:
SELECT * FROM users WHERE name = '' OR '1'='1';
SELECT * FROM users WHERE name = '' OR '1'='1' -- ';
If this code were to be used in an authentication procedure then this example could be used to force the selection of a valid username because the evaluation of '1'='1' is always true.
The following value of "userName" in the statement below would cause the deletion of the "users" table as well as the selection of all data from the "userinfo" table (in essence revealing the information of every user), using an API that allows multiple statements:
a';DROP TABLE users; SELECT * FROM userinfo WHERE 't' = 't
This input renders the final SQL statement as follows:
SELECT * FROM users WHERE name = 'a';DROP TABLE users; SELECT *
FROM userinfo WHERE 't' = 't';
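The standard defense is to never build SQL by string concatenation and instead bind user input as parameters. A minimal sketch using Python's stdlib sqlite3 module (table and values are illustrative) shows the same ' OR '1'='1 payload succeeding against concatenation and failing against a bound parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "' OR '1'='1"

# Vulnerable: string concatenation lets the payload rewrite the WHERE clause,
# so the query becomes ... WHERE name = '' OR '1'='1' and matches every row.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: the bound parameter is treated as a plain string value, never as SQL,
# so no row has that literal name and nothing matches.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()
```

The same parameter-binding idea applies in any driver or ORM; SQLAlchemy's expression language, for instance, binds parameters for you.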
It is very handy to see database changes right there in the IDE, especially when working with Hibernate. NetBeans has a beautiful database service/plugin where you can fire queries, see result sets, modify tables, and analyze associations such as foreign keys and the relationships between tables in graphical form.
To find all tables that referenced “Organization” table use the following SQL query:
select table_name, constraint_name, status from user_constraints
where constraint_type = 'R'
and r_constraint_name in
(select constraint_name from user_constraints
where constraint_type in ('P', 'U')
and table_name = 'ORGANIZATION')
order by table_name, constraint_name;
NOTE: In the query above ‘ORGANIZATION’ can be replaced with the name of any table you are interested in.
When you are dealing with the database as a Java programmer, you do not need to maintain table-creation scripts for various databases. Hibernate creates the database tables for you automatically, based on the entity relationships and associations, if you configure it that way.
The simplest way to configure it is the hibernate.hbm2ddl.auto property in hibernate.cfg.xml. The possible options are:
- create-drop: executes SchemaExport when the SessionFactory initializes and drops the schema at the end of the life of the factory.
- create: creates the schema, destroying any previous data.
- update: updates the schema according to changes made in the Hibernate entities, e.g. if some columns/mappings/associations are added or dropped.
- validate: validates the schema and makes no changes to the database; if the schema is not up to date, it will throw an error.
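For reference, the setting is typically supplied as a property element in hibernate.cfg.xml; the value update below is just an example, and the surrounding session-factory element is assumed:

```xml
<hibernate-configuration>
  <session-factory>
    <!-- one of: create-drop, create, update, validate -->
    <property name="hibernate.hbm2ddl.auto">update</property>
  </session-factory>
</hibernate-configuration>
```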
NOTE : Although the Hibernate folks do their best, you simply cannot rely on automatic schema updates in production. Write your own patches, review them with a DBA, test them, then apply them manually.
While working on a customer support issue about reporting in my project, I learned a few things which I would like to share with my readers.
The issue was that on a particular day the customer was not able to generate the user access report for all the users of their organization. However, the next day they were able to generate it.
I analyzed the production logs and could not find anything fishy. So I debugged the app and tried to reproduce the issue. I tried to generate the report for about 4000 users; after 2 minutes I canceled it and tried to generate the report for 1000 users, then for 100 users, then for 10 users, and finally for only 1 user.
But to my surprise the report was not getting generated even for 1 user.
While debugging I realized there was a full table scan, and with the increased data it was taking more time. The catch, however, was that when the user keeps canceling before getting the output of one report, the database thread/request is not canceled. Subsequent requests all get queued up.
While debugging on my local box as part of the investigation, I observed the following facts:
1. When cancel is pressed in the user access report wizard, the DB call is not terminated; only the UI moves to the admin page. Hence, on subsequent report generation the query gets queued up, which makes the report very slow, and the more you retry by canceling, the more items get queued up on the DB server, resulting in endless waiting. Probably on 23 Nov and 6 Oct there were a lot of requests and retries, due to which the report could not be generated; the next day, with zero items in the queue, reports were generating.
2. There was a full table scan on the app event table which made the query cost very expensive; hence for 100+ users it was very slow, and for all users (4665) it was impossible to get the report within a tolerable time frame.
3. By introducing an index on session id, the full table scan was avoided and the response was much faster: for 100+ users the data was retrieved within 100 seconds, and for all 4665 users the report was, for the first time, generated, and within 10 minutes at that.
4. Millions of rows are inserted every single day, which impacts generation of the user activity report. Hence we should come up with a solution to archive older data into a denormalized table, probably on a weekly basis using a cron job, so that older data can be fetched faster.
5. There are no programmatic logs in the code path of the user access report, so whenever generate report is clicked or canceled, nothing is traceable.
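The effect of the index in observations 2 and 3 can be sketched with Python's stdlib sqlite3 (table and column names are illustrative, not the real schema): EXPLAIN QUERY PLAN shows the same filter flipping from a full table scan to an index search once session_id is indexed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE app_event (session_id INTEGER, payload TEXT)")

def plan(sql):
    # Each EXPLAIN QUERY PLAN row is (id, parent, notused, detail);
    # join the detail strings into one readable line.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM app_event WHERE session_id = 42"
before = plan(query)   # full scan of app_event
conn.execute("CREATE INDEX idx_session ON app_event(session_id)")
after = plan(query)    # index search via idx_session
```

The same EXPLAIN-style check (AUTOTRACE or EXPLAIN PLAN on Oracle) is how the full table scan was confirmed in the real investigation.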
As a part of the solution design, the following action items should be considered:
1. Index the session id column on the app event table, plus further optimization of the table.
2. Terminate the DB process on click of cancel instead of allowing it to queue up on the database server.
3. An archival strategy to deal with the increasing data load on the APP_EVENT table.
4. Logs should be added for better monitoring.
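Action item 2 can be sketched with stdlib sqlite3, which exposes an interrupt() call for exactly this purpose (the real app would use the JDBC equivalent, java.sql.Statement.cancel()); the deliberately slow cross join and the 0.2-second delay stand in for the report query and the user pressing Cancel:

```python
import sqlite3
import threading
import time

conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE app_event (session_id INTEGER)")
conn.executemany("INSERT INTO app_event VALUES (?)",
                 [(i,) for i in range(500)])

result = {}

def slow_report():
    try:
        # Triple cross join (500^3 rows) makes the scan slow enough to interrupt.
        conn.execute(
            "SELECT count(*) FROM app_event a, app_event b, app_event c"
        ).fetchone()
        result["status"] = "finished"
    except sqlite3.OperationalError:
        # interrupt() aborts the in-flight statement with "interrupted".
        result["status"] = "cancelled"

t = threading.Thread(target=slow_report)
t.start()
time.sleep(0.2)     # simulate the user pressing Cancel shortly after
conn.interrupt()    # terminate the running statement instead of queueing more
t.join()
```

Without the interrupt() call, each abandoned report would keep running to completion on the server, which is exactly the queueing behavior observed above.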
The default port of many servers and services is 8080; in fact, the Oracle XE database's default HTTP port is also 8080, which clashes with and creates trouble for other applications.
The procedure to change default oracle database port is as below:
1. Log in to Oracle using c:>sqlplus
2. Change the HTTP port from 8080 to 8083:
sql>call dbms_xdb.cfg_update(updateXML(dbms_xdb.cfg_get(), '/xdbconfig/sysconfig/protocolconfig/httpconfig/http-port/text()', 8083));
3. Change the FTP port from 2100 to 2111:
sql>call dbms_xdb.cfg_update(updateXML(dbms_xdb.cfg_get(), '/xdbconfig/sysconfig/protocolconfig/ftpconfig/ftp-port/text()', 2111));
4. Refresh the settings:
sql>exec dbms_xdb.cfg_refresh
5. To verify that the port changed:
sql>select dbms_xdb.gethttpport() from dual;
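You can also confirm from outside the database that something is now listening on the new port. A small stdlib-only helper (host and port 8083 are just the values from this walkthrough):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. after the change above: port_open("localhost", 8083)
```

This only proves a listener exists; it does not verify it is the XDB HTTP service, so the gethttpport() check above is still the authoritative one.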