Query Fails Because The File Cannot Be Opened
If your query fails with the error "File cannot be opened because it does not exist or it is used by another process" and you're sure that the file exists and isn't used by another process, then serverless SQL pool can't access the file. This problem usually happens because your Azure AD identity doesn't have rights to access the file, or because a firewall is blocking access to it.
Failed To Query Lock Null Status
Serverless SQL pool can't read files that are being modified while the query runs, because the query can't take a lock on them. If you know that the only modifications are appends, you can try setting the following option: "READ_OPTIONS":["ALLOW_INCONSISTENT_READS"].
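For example, the option can be passed to OPENROWSET through its ROWSET_OPTIONS argument. This is a sketch only: the storage account, container, path, and CSV layout below are placeholders, not values from this article.

```sql
-- Sketch: read appendable CSV files while tolerating concurrent appends.
-- <storage-account>, <container>, and the path are placeholders.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://<storage-account>.dfs.core.windows.net/<container>/logs/*.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    ROWSET_OPTIONS = '{"READ_OPTIONS":["ALLOW_INCONSISTENT_READS"]}'
) AS [result];
```

Note that this trades consistency for availability: rows appended mid-query may or may not appear in the result.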
When you receive a status code 500 on a write operation, the operation may have succeeded or failed. If the write was a TransactWriteItems request, it is safe to retry the operation. If it was a single-item write such as PutItem, UpdateItem, or DeleteItem, your application should read the state of the item before retrying, and/or use condition expressions to ensure that the item remains in a correct state after the retry, regardless of whether the prior attempt succeeded or failed. If idempotency is a requirement for the write, use TransactWriteItems, which supports idempotent requests by automatically specifying a ClientRequestToken to disambiguate multiple attempts to perform the same action.
This does not tell you why the job failed to submit; it is often due to an invalid resource request that the scheduler blocks. Unfortunately, Nextflow does not pick up the message reported by the cluster.
Updates and deletes in SingleStore are row locking operations. If a row is currently locked by query q1 running in transaction t1, a second query q2 in transaction t2 that operates on the same row will be blocked until q1 completes.
Typically, a deadlock happens when two or more transactions are writing to the same rows, but in a different order. For example, consider two concurrently running queries q1 and q2 in different transactions, where both q1 and q2 want to write to rows r1 and r2. If the query q1 wants to write to rows r1 and then r2, but the query q2 wants to write to row r2 first and then r1, they will deadlock.
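The interleaving above can be sketched as follows, using a hypothetical accounts table introduced purely for illustration:

```sql
-- Hypothetical table "accounts"; r1 is the row with id = 1, r2 the row with id = 2.

-- Session 1 (transaction t1):
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- t1 locks r1

-- Session 2 (transaction t2):
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 2;  -- t2 locks r2

-- Session 1:
UPDATE accounts SET balance = balance + 10 WHERE id = 2;  -- blocks, waiting for t2

-- Session 2:
UPDATE accounts SET balance = balance + 10 WHERE id = 1;  -- blocks, waiting for t1: deadlock
```

Acquiring rows in a consistent order across all transactions (for example, always ascending by primary key) prevents this cycle from forming.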
Open transactions hold the locks on rows affected by the transaction until they are committed or rolled back, and any other write query modifying the same rows has to wait for the open transaction to release the locks. If the query has to wait for more than the lock_wait_timeout (default 60 seconds), it fails. This happens most often when the open transaction is idle and unnecessarily holding the locks.
If a query has failed because it waited long enough to exceed the lock_wait_timeout, identify the transaction that is causing the timeout and kill its connection. Killing the connection rolls back the uncommitted writes of the open transaction. For example, suppose a write is performed in a transaction that is never committed.
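A minimal reproduction might look like this; the orders table and the connection ID 42 are illustrative, and the error text is paraphrased rather than quoted from the server:

```sql
-- Session 1: open transaction that writes but never commits.
BEGIN;
UPDATE orders SET status = 'shipped' WHERE id = 5;   -- locks the row; no COMMIT follows

-- Session 2: blocks on the same row, then fails after lock_wait_timeout
-- (default 60 seconds) with a lock wait timeout error:
UPDATE orders SET status = 'returned' WHERE id = 5;

-- Find the idle open transaction and kill its connection,
-- which rolls back its uncommitted writes:
SHOW PROCESSLIST;
KILL CONNECTION 42;   -- 42 is illustrative; use the ID from SHOW PROCESSLIST
```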
Note: Although the error message indicates that the lock is owned by the connection with ID 76, that is not the connection to kill. ID 76 belongs to one of the distributed leaf processes, which holds the lock our query was waiting on. Instead, kill the connection for the aggregator process, which rolls back the entire transaction.
I'll start by adding the StatusCodePagesMiddleware as I did in my previous post. I'm using the same UseStatusCodePagesWithReExecute as before, providing the error status code via a statusCode query string parameter when the pipeline is re-executed.
Note that I've used the null-propagation syntax ?. to retrieve the path, as the feature will only be added if the StatusCodePagesMiddleware is re-executing the pipeline. This avoids null reference exceptions if the action is executed without going through the StatusCodePagesMiddleware, for example by directly requesting /Home/Error?statusCode=404.
From the above information, we can get an overview of the transactions currently active on the server. Transaction 3097 is locking a row that transaction 3100 needs to access. However, the output does not show the actual query text, which would help us figure out which part of the query/statement/transaction to investigate further. Using the blocking MySQL thread ID, 48, let's see what we can gather from the MySQL processlist.
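For instance, the blocking thread's connection details and currently running statement can be pulled from the processlist table, filtering on the thread ID 48 identified above:

```sql
-- Look up the blocking thread; the INFO column holds its current statement
-- (NULL if the connection is idle, which is common for forgotten open transactions).
SELECT id, user, host, db, command, time, state, info
FROM information_schema.PROCESSLIST
WHERE id = 48;
```

Alternatively, SHOW FULL PROCESSLIST lists all threads with untruncated statement text.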