Batch Processing

The Oracle error ORA-02399: exceeded maximum connect time, you are being logged off indicates that a session was forcibly disconnected because it stayed connected longer than the CONNECT_TIME limit defined in its user profile. Long-running queries, or scripts that hold a connection open while executing and fetching results, are a common trigger.


To avoid this error when running Oracle queries from Python, you can optimize the queries, break the work into smaller batches, or adjust timeout and profile settings. Here are the main approaches:


1. Optimize the Query

• Add Filters: Reduce the result set by filtering unnecessary rows using WHERE conditions.

• Limit Rows: Use ROWNUM or FETCH FIRST N ROWS ONLY to limit the number of rows returned, as in the SQL below; a Python sketch for running such a query from cx_Oracle follows this list.


SELECT *
FROM my_table
WHERE some_condition
FETCH FIRST 1000 ROWS ONLY;



• Use Indexes: Ensure the query uses appropriate indexes to speed up data retrieval.

• Avoid Large Joins: If the query involves multiple joins, consider splitting it into smaller queries or materialized views.
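
When you run such a row-limited query from Python, letting the driver pull rows from the server in batches keeps memory use low as well. Below is a minimal sketch using cx_Oracle; the connection details, my_table, and the status filter are placeholders, and cursor.arraysize controls how many rows each round trip fetches:

import cx_Oracle

connection = cx_Oracle.connect("username", "password", "hostname/service_name")
cursor = connection.cursor()
cursor.arraysize = 500  # rows fetched per round trip to the server

cursor.execute("""
    SELECT *
    FROM my_table
    WHERE status = 'ACTIVE'
    FETCH FIRST 1000 ROWS ONLY
""")

while True:
    rows = cursor.fetchmany()  # fetches up to cursor.arraysize rows
    if not rows:
        break
    for row in rows:
        print(row)  # Replace with your processing logic

cursor.close()
connection.close()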


2. Break Down the Query

• Split large queries into smaller chunks based on a logical key (e.g., time ranges, primary key ranges, or row numbers). Here’s how you can do this:


Example: Fetching Data in Batches


Instead of retrieving all rows in one query, fetch them in smaller chunks using ROWNUM or a key column such as an ID or a date.


import cx_Oracle

# Connect to the Oracle database
connection = cx_Oracle.connect("username", "password", "hostname/service_name")
cursor = connection.cursor()

# Define batch size
batch_size = 1000
start_row = 1

# Classic ROWNUM pagination. The inner query is ordered by a unique key
# (assumed here to be "id") so that batches never overlap or skip rows.
query = """
    SELECT *
    FROM (
        SELECT ROWNUM AS rnum, a.*
        FROM (SELECT * FROM my_table ORDER BY id) a
        WHERE ROWNUM <= :last_row
    )
    WHERE rnum >= :first_row
"""

while True:
    # Execute the query for the current window; bind variables avoid
    # re-parsing the statement on every iteration
    cursor.execute(query, first_row=start_row, last_row=start_row + batch_size - 1)

    # Fetch data
    rows = cursor.fetchall()
    if not rows:
        break  # Exit loop when no more rows

    # Process the rows
    for row in rows:
        print(row)  # Replace with your processing logic

    # Increment batch
    start_row += batch_size

cursor.close()
connection.close()


3. Use Pagination


If the query returns a large number of results, implement pagination with the OFFSET and FETCH clauses (available in Oracle 12c and later):


page_size = 1000
offset = 0

while True:
    query = f"""
        SELECT *
        FROM my_table
        ORDER BY id
        OFFSET {offset} ROWS FETCH NEXT {page_size} ROWS ONLY
    """
    cursor.execute(query)
    rows = cursor.fetchall()

    if not rows:
        break  # Exit loop when no more rows

    # Process rows
    for row in rows:
        print(row)

    offset += page_size
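
Note that OFFSET pagination makes the database skip and discard all earlier rows on every page, so later pages get progressively slower on large tables. A lighter-weight alternative is keyset pagination, sketched below under the assumption that my_table has a unique, indexed id column that is the first column in the result:

last_id = 0  # start below the smallest id; assumes positive numeric ids
page_size = 1000

query = """
    SELECT *
    FROM my_table
    WHERE id > :last_id
    ORDER BY id
    FETCH FIRST :page_size ROWS ONLY
"""

while True:
    cursor.execute(query, last_id=last_id, page_size=page_size)
    rows = cursor.fetchall()
    if not rows:
        break

    for row in rows:
        print(row)  # Replace with your processing logic

    # Remember the highest id seen so far (id assumed to be the first column)
    last_id = rows[-1][0]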


4. Timeout Handling in Python


Set a call timeout on the connection so that any single database round-trip is cancelled if it takes too long. This prevents a hanging or long-running query from keeping the session busy indefinitely.


# Example: setting a call timeout in cx_Oracle
import cx_Oracle

connection = cx_Oracle.connect("username", "password", "hostname/service_name")
cursor = connection.cursor()

# Limit each database round-trip (in milliseconds); the attribute lives on the
# connection and requires Oracle Client 18c or later
connection.call_timeout = 5000  # Time out after 5 seconds
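
When the timeout fires, the interrupted call raises cx_Oracle.DatabaseError in the Python script. A minimal sketch of handling it (my_table is a placeholder) so the script can log the failure and decide whether to retry:

try:
    cursor.execute("SELECT * FROM my_table")
    rows = cursor.fetchall()
except cx_Oracle.DatabaseError as exc:
    error, = exc.args
    # The round trip was cancelled (or failed for another reason);
    # log it and retry, skip, or re-raise as appropriate
    print(f"Query failed or timed out: {error.message}")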


5. Execute as Background Jobs in Oracle


For queries that require a lot of processing, consider offloading the work to the database itself as a background job using Oracle’s DBMS_SCHEDULER (or the legacy DBMS_JOB package), writing the output to a table. Once the job completes, fetch the results from that table with a short query, as sketched below.
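
A minimal sketch of submitting such a job from Python (creating a job requires the CREATE JOB privilege). The job name, the my_table_summary staging table, and the INSERT statement are illustrative placeholders; the point is that DBMS_SCHEDULER.CREATE_JOB runs the heavy work inside the database, so the Python session does not have to stay connected while it runs:

# Create and enable a one-off database job that materializes the results
cursor.execute("""
    BEGIN
        DBMS_SCHEDULER.CREATE_JOB(
            job_name   => 'REFRESH_MY_TABLE_SUMMARY',
            job_type   => 'PLSQL_BLOCK',
            job_action => 'BEGIN
                             INSERT INTO my_table_summary
                             SELECT some_key, COUNT(*) FROM my_table GROUP BY some_key;
                             COMMIT;
                           END;',
            start_date => SYSTIMESTAMP,
            enabled    => TRUE);
    END;
""")

# Later, from a fresh short-lived session:
# cursor.execute("SELECT * FROM my_table_summary")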


6. Close Idle Connections


Ensure Python scripts properly close idle database connections and cursors to avoid excessive usage of database resources:


cursor.close()
connection.close()
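
If it suits your script, context managers make this automatic: in cx_Oracle 6 and later, connections and cursors both support the with statement, which closes them even when the processing code raises an exception. A minimal sketch (my_table and the connection details are placeholders):

import cx_Oracle

with cx_Oracle.connect("username", "password", "hostname/service_name") as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT COUNT(*) FROM my_table")
        print(cursor.fetchone())
# Both the cursor and the connection are closed here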


7. Increase Maximum Connect Time (If Possible)


If you have control over the database, you can raise the limit itself. Ask your DBA to modify the CONNECT_TIME resource limit (expressed in minutes) in the profile assigned to the user; the example below changes the DEFAULT profile, which affects every user assigned to it:


ALTER PROFILE default LIMIT CONNECT_TIME UNLIMITED;


Note: This option should only be used if long-running sessions are unavoidable and the other optimizations have already been applied. Also keep in mind that profile resource limits such as CONNECT_TIME are only enforced when the RESOURCE_LIMIT initialization parameter is set to TRUE.
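
To see what limit currently applies to your own account before requesting a change, you can query the USER_RESOURCE_LIMITS data dictionary view, which any user can read for their own profile:

# The LIMIT column holds the value in minutes, or 'UNLIMITED' / 'DEFAULT'
cursor.execute("""
    SELECT *
    FROM user_resource_limits
    WHERE resource_name = 'CONNECT_TIME'
""")
print(cursor.fetchone())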


By splitting queries into manageable chunks, optimizing the query, or adjusting the timeout settings, you can prevent the ORA-02399 error and ensure smooth query execution in Python. 


