Design T-SQL stored procedures.
This objective may include but is not limited to: execution context (EXECUTE AS), table-valued parameters, determining appropriate way to return data, WITH RECOMPILE/OPTION (RECOMPILE), error handling, TRY/CATCH
This section deals with how we define the permissions stored procedures need to access underlying objects, how we pass in multi-record inputs, how we return the necessary data to the user, how / when / why we might need to force a recompilation of the sproc, and how we catch and handle errors within stored procedures. There doesn't seem to be anything here about what stored procedures are or how they are created, so I assume we're expected to know that already.
Execution Context (Execute As)
You can define the Execution Context of user-defined modules (except inline table-valued functions) including functions, procedures, queues, and triggers. The execution context is switched in order to validate permissions on the objects referenced by the module. This allows for tighter security since users will only need to be granted permissions on the user modules, and not on the underlying objects (Ownership Chaining, which was discussed briefly in the Application Security post, must be observed).
A user's permission to execute the module is checked any time they execute a module specified to run in a context other than CALLER. However, permissions on the underlying objects are checked against the account specified in the EXECUTE AS clause. This impersonation reverts when the module execution is complete.
A helpful article, Using EXECUTE AS to Create Custom Permission Sets, explains how this clause can be used to grant special permissions, such as granting permissions to TRUNCATE TABLE without granting ALTER TABLE permissions (which are normally required).
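A minimal sketch of the pattern from that article (the table, procedure, and user names here are hypothetical):

```sql
-- TRUNCATE TABLE normally requires ALTER TABLE permission. By running the
-- procedure as its owner (dbo here), callers only need EXECUTE on the proc.
CREATE PROCEDURE dbo.TruncateSalesStaging
WITH EXECUTE AS OWNER
AS
BEGIN
    TRUNCATE TABLE dbo.SalesStaging;
END;
GO

-- The caller is granted EXECUTE only -- no ALTER TABLE needed.
GRANT EXECUTE ON dbo.TruncateSalesStaging TO StagingLoader;
```

Because dbo owns both the procedure and the table, the permission check inside the procedure is made against dbo, not the caller.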
Table-Valued Parameters
Table-Valued Parameters allow record sets to be passed into stored procedures and functions through the use of user-defined table types and a table variable of the appropriate table type. This allows records to be passed in without having to allow for many parameters, or complex tokenizing logic, as well as providing the flexibility to work with the input in a set-based fashion. It needs to be stated that, like all table variables, the server engine will not maintain statistics for table-valued parameters. Furthermore, these parameters MUST be declared as READONLY! You cannot perform any data modifications on the table-valued parameter inside of the stored procedure or function. Also, a table-valued parameter cannot be the target of a SELECT INTO or INSERT ... EXEC statement.
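As a sketch (the type, table, and procedure names are hypothetical), the pattern looks like this:

```sql
-- A user-defined table type describes the shape of the input.
CREATE TYPE dbo.OrderLineType AS TABLE
(
    ProductID int NOT NULL,
    Quantity  int NOT NULL
);
GO

-- Table-valued parameters must be declared READONLY.
CREATE PROCEDURE dbo.InsertOrderLines
    @Lines dbo.OrderLineType READONLY
AS
BEGIN
    INSERT INTO dbo.OrderLines (ProductID, Quantity)
    SELECT ProductID, Quantity
    FROM @Lines;            -- the input is used in a set-based fashion
END;
GO

-- The caller populates a table variable of the type and passes it in.
DECLARE @NewLines dbo.OrderLineType;
INSERT INTO @NewLines VALUES (101, 2), (205, 1);
EXEC dbo.InsertOrderLines @Lines = @NewLines;
```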
Determining Appropriate Way to Return Data
There are multiple ways to return data through stored procedures. You can return data by using the OUTPUT parameters in order to allow the calling program to capture the data in a variable. When using this method, the OUTPUT keyword will need to be specified in the calling routine as well as in the Stored Procedure definition. Note that it is possible for a variable to be both an input and an output parameter at the same time.
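A minimal sketch of an OUTPUT parameter (object names are hypothetical) shows the keyword appearing in both the definition and the call:

```sql
CREATE PROCEDURE dbo.GetOrderCount
    @CustomerID int,            -- input parameter
    @OrderCount int OUTPUT      -- OUTPUT required in the definition
AS
BEGIN
    SELECT @OrderCount = COUNT(*)
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;
GO

DECLARE @Count int;
EXEC dbo.GetOrderCount @CustomerID = 42,
                       @OrderCount = @Count OUTPUT;  -- OUTPUT required in the call
SELECT @Count AS OrderCount;
```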
Another way to return data is by using the return code. With this method, the stored procedure returns an integer value that generally communicates the state of execution at which the stored procedure exited, such as communicating errors or missing values. However, this return code can be subverted to return any integer value and could therefore be used to pass integer values back and forth between the calling program and the stored procedure. In either case, the value captured will need to be placed into a variable for use in the calling program (and thus, there is little reason to subvert the return value as described above).
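A short sketch of the return code in use (hypothetical procedure and table names):

```sql
-- The RETURN statement exits the procedure with an integer status code.
CREATE PROCEDURE dbo.CheckCustomer
    @CustomerID int
AS
BEGIN
    IF NOT EXISTS (SELECT 1 FROM dbo.Customers
                   WHERE CustomerID = @CustomerID)
        RETURN 1;   -- signal: customer not found
    RETURN 0;       -- signal: success
END;
GO

-- The return code must be captured into a variable to be used.
DECLARE @rc int;
EXEC @rc = dbo.CheckCustomer @CustomerID = 42;
IF @rc <> 0
    PRINT 'Customer not found.';
```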
The final method described for returning data is to use a cursor data type in the output parameter. With this action, a cursor is returned through the use of the output parameter and can be acted upon as a normal cursor after the object is returned from the stored procedure.
Although not mentioned in the returning data from stored procedures article, I have certainly simply selected data from the database in a stored procedure to return a record set. The calling code can also capture such a result set by inserting into a table with an INSERT ... EXEC statement.
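A quick sketch of both halves of that pattern (hypothetical objects):

```sql
-- A procedure that returns a record set simply by selecting.
CREATE PROCEDURE dbo.GetRecentOrders
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= DATEADD(day, -7, GETDATE());
END;
GO

-- The caller can capture that result set with INSERT ... EXEC.
DECLARE @Recent TABLE (OrderID int, OrderDate datetime);
INSERT INTO @Recent
EXEC dbo.GetRecentOrders;
```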
WITH RECOMPILE / OPTION (RECOMPILE)
Recompiling Stored Procedures can often be a necessity when the data in the underlying structures has changed enough that the original execution plan is no longer optimal (or when input parameters would result in an inefficient execution plan). Using the WITH RECOMPILE option in the stored procedure definition means that no execution plan will be cached for the procedure. In other words, it will be recompiled EVERY time that it is run. You can also use the WITH RECOMPILE option when you EXECUTE the stored procedure to recompile it on command. This option is used if the parameter you are supplying is atypical. Finally, you can use the OPTION (RECOMPILE) query hint to ensure that a single statement within the stored procedure is recompiled, instead of the whole procedure. This also has the added benefit that the current values of local variables are used when the statement is optimized, rather than being treated as unknowns.
Note that you can further use the sp_recompile system stored procedure to force a recompilation of a stored procedure the next time it is run.
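All four options side by side, as a sketch (procedure and table names are hypothetical):

```sql
-- 1. Definition-level: no plan is cached; compiled on every execution.
CREATE PROCEDURE dbo.SearchOrders
    @CustomerID int
WITH RECOMPILE
AS
    SELECT OrderID
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
GO

-- 2. Call-level: recompile just this execution (e.g. an atypical parameter).
EXEC dbo.SearchOrders @CustomerID = 42 WITH RECOMPILE;

-- 3. Statement-level: only the statement carrying the hint is recompiled,
--    and current local variable values are used during optimization.
--    (Inside a procedure body:)
--      SELECT OrderID FROM dbo.Orders
--      WHERE CustomerID = @LocalID
--      OPTION (RECOMPILE);

-- 4. Mark the procedure for recompilation the next time it runs.
EXEC sp_recompile N'dbo.SearchOrders';
```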
Error Handling
Starting in SQL 2005, the TRY / CATCH block was added, allowing for a stronger error handling framework out of the box. Logic can be encapsulated to trap errors, attempt to determine the source, and retry the commands (in the case of deadlock handling, for instance). Alternately, the error generated during the transaction might be unanticipated, at which point the code should appropriately roll back the transaction(s) and generate appropriate error notifications.
There are several functions used to retrieve the error information: ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE(), ERROR_PROCEDURE(), ERROR_LINE() and ERROR_MESSAGE(). Furthermore, the XACT_STATE() function can be used to show the current state of the transaction. This information can be used to determine whether to attempt to roll back the transaction to a savepoint, roll back the full transaction, report the error, etc.
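A sketch pulling those functions together (the table and column names are hypothetical):

```sql
BEGIN TRY
    BEGIN TRANSACTION;
    -- Hypothetical work that may fail.
    UPDATE dbo.Accounts
    SET Balance = Balance - 100
    WHERE AccountID = 1;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- XACT_STATE(): 1 = active and committable, -1 = uncommittable,
    -- 0 = no active transaction.
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;

    SELECT ERROR_NUMBER()    AS ErrorNumber,
           ERROR_SEVERITY()  AS ErrorSeverity,
           ERROR_STATE()     AS ErrorState,
           ERROR_PROCEDURE() AS ErrorProcedure,
           ERROR_LINE()      AS ErrorLine,
           ERROR_MESSAGE()   AS ErrorMessage;
END CATCH;
```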
Erland Sommarskog has a background article on Error Handling that was written for SQL 2000. He also has an unfinished article on Error Handling in SQL Server 2005 (and not much has changed between 2005 and 2008). Both of these articles are extremely valuable.
TRY / CATCH
Using a try catch block will result in any errors encountered while executing the script with a severity higher than 10 (that do not close the connection) being immediately passed to the catch block. If the severity is 10 or lower, the message is informational and execution will continue in the try block as normal. If no errors are encountered, execution will pass from the END TRY statement to after the END CATCH statement.
The errors sent to the catch block ARE NOT automatically passed back to the calling application. This is useful in that errors can be handled gracefully within the try catch block. However, this also means that there is the ability to generate errors without ever properly handling them or generating appropriate notifications.
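One common pattern for handling an error gracefully inside the CATCH block is a bounded retry on deadlock (error number 1205). This is a sketch only; the table names are hypothetical, and the delay and retry count are arbitrary:

```sql
DECLARE @retries int = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Inventory
        SET Quantity = Quantity - 1
        WHERE ProductID = 101;
        COMMIT TRANSACTION;
        BREAK;  -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 1205 AND @retries > 1
        BEGIN
            SET @retries = @retries - 1;
            WAITFOR DELAY '00:00:00.100';   -- brief pause, then retry
        END
        ELSE
        BEGIN
            -- Unexpected error (or out of retries): re-raise for the caller.
            DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);
            BREAK;
        END
    END CATCH
END;
```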
Note that TRY / CATCH blocks must abide by scope logic. Therefore, a try catch block cannot span batches, BEGIN END blocks, etc. TRY / CATCH blocks CAN be nested. Errors will raise to the first CATCH block that is encountered.
TRY / CATCH blocks CANNOT be used in User Defined Functions.
There are some instances where a TRY / CATCH block will not handle the error: when the session is ended with a KILL command; when the connection with the client is broken; when an error of severity 20 or higher stops the database engine task for the session; with compile errors such as syntax errors; and with errors that occur during statement-level recompilation, such as object name resolution errors.