
Best Practices – Setting up StealthAUDIT SQL Server Database


If you have decided to implement StealthAUDIT to meet your data access governance and security needs, then you have made the right decision.  The StealthAUDIT core platform requires a SQL Server database to store audited data, configuration settings, and certain historical data.  In this blog, I will review some best practices for setting up the SQL Server database used by StealthAUDIT.  This blog is not intended to replace the StealthAUDIT Installation Guide, but rather to supplement it. 

StealthAUDIT supports both compliance auditing of various IT infrastructure components and sensitive data discovery in both structured (SQL and NoSQL databases) and unstructured data (file shares, Amazon AWS S3, etc.).  Depending on the number of entities being scanned in a production environment, the amount of data written to the back-end database can be quite large.  Therefore, it is challenging to design the back-end database without knowing the StealthAUDIT transaction workload.  In addition, all the Analysis Jobs in StealthAUDIT use the SQL Server tempdb extensively.  Incorrectly sizing the tempdb in the SQL Server instance that also hosts the StealthAUDIT database can lead to serious performance issues. 

SQL Server Version

StealthAUDIT supports SQL Server 2012 through 2019.  SQL Server Express Edition can be used to conduct proofs-of-concept, but it should not be used for production implementations.  I recommend that you use SQL Server Enterprise Edition (2012 through 2019) for hosting the StealthAUDIT database. 

Database Sizing

The StealthAUDIT installation guide recommends the minimum sizing for the database to be 120 GB for the core platform.  While that is a good starting point, please note that the sizing is highly dependent on solution-set, the number of hosts being scanned, frequency of scanning, historical data retention settings, and reporting requirements.  Each solution-set has its own recommendation when it comes to database sizing.  Depending on the number of solution-sets you are licensed for, I recommend that you take the cumulative requirement and use that as a starting point. 

Database Security

StealthAUDIT supports both SQL Server Authentication and Windows Authentication to the back-end SQL Server database. I recommend using Windows Authentication with a Windows Active Directory (AD) user as the SQL Server login.  The same account can also be used to scan all the IT infrastructure components. This simplifies the task of managing and maintaining all the privileges required by StealthAUDIT through a single login.  While the installation guide recommends naming the SQL Server login StealthAUDIT, the name can be modified to comply with your own naming convention. In the SQL Server instance, assign the sysadmin fixed server role to the StealthAUDIT login.  If that is not an option, then the least-privileged user model calls for the db_owner fixed database role to be assigned to the user.  Also, the default database schema for the StealthAUDIT user must be dbo.  If you prefer to change the default schema using T-SQL, use the following SQL statements.  You can also assign privileges and default schema settings using SQL Server Management Studio (SSMS).

The SQL script assumes that the name of the database is StealthAUDIT and the login name is also StealthAUDIT.  This is not mandatory;

USE [StealthAUDIT]
GO
CREATE USER [StealthAUDIT] FOR LOGIN [StealthAUDIT]
GO
ALTER USER [StealthAUDIT] WITH DEFAULT_SCHEMA=[dbo]
GO

both the database name and the login name can be changed to suit your needs. 
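If you follow the least-privileged model mentioned above instead of assigning sysadmin, the Windows-authenticated login can be created and granted db_owner with statements along these lines.  This is a sketch: CORP\StealthAUDIT is a placeholder for your actual AD account name, and ALTER ROLE requires SQL Server 2012 or later (which matches the supported versions).

```sql
-- Placeholder domain account; replace CORP\StealthAUDIT with your AD account
USE [master]
GO
CREATE LOGIN [CORP\StealthAUDIT] FROM WINDOWS;
GO
USE [StealthAUDIT]
GO
-- Map the login to a database user with the required dbo default schema
CREATE USER [StealthAUDIT] FOR LOGIN [CORP\StealthAUDIT] WITH DEFAULT_SCHEMA = [dbo];
GO
-- Grant the db_owner fixed database role (least-privileged alternative to sysadmin)
ALTER ROLE [db_owner] ADD MEMBER [StealthAUDIT];
GO
```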

I recommend a dedicated SQL Server instance for StealthAUDIT rather than sharing an instance with other databases.  A dedicated instance makes troubleshooting, upgrading, and maintenance easier.

High Availability (HA)

Data stored in the StealthAUDIT database is mostly transient in nature and can be reproduced or re-populated very easily.  Therefore, I do not see a need to set up any type of SQL Server HA option such as Always On Failover Clustering or Always On Availability Groups (available in SQL Server 2012 and above). Failover clustering requires shared storage, whereas Availability Groups do not.  In addition, unlike failover clustering, Availability Groups do not operate at the instance level; they operate at the database level.  In my opinion, an Availability Group is overkill as an HA solution for the StealthAUDIT database. It also drives up the cost of implementation, as the Availability Group option is only available in SQL Server Enterprise Edition. If the SQL Server is virtualized, you might want to consider relying on hypervisor-supported HA options.  For example, if you are using VMware, then vSphere HA (with vMotion) is a good alternative and negates the need to set up an HA solution at the SQL Server level.  If your company mandates an HA solution for all SQL Server databases in your organization, please contact Stealthbits support for further guidance and recommendations.

Database Data Files

The data files are used by the database to persist user data to disk storage.  By default, when a new database is created, an 8 MB data file with auto-growth set to 64 MB chunks is created.  These settings are inherited from the model system database that is part of all SQL Server installations. 

Database Transaction Log Files

A database transaction is a group of Data Manipulation Language (DML) operations that either succeed or fail as one atomic unit.  Transactions can either be committed or rolled back. Before being committed to data files, transactions are logged to log files for recovery purposes.  Similar to the data file, a single 8 MB transaction log file with auto-growth set to 64 MB is created.  I recommend that the default attributes of the transaction log file be modified to avoid database performance issues. While there are no universally optimal values for these parameters, I recommend starting with an initial transaction log file size of 2048 MB with 200 MB increments for the auto-growth setting.  The maximum file size should be set to a manageable value and depends on the disk space available and file manageability.  Incorrect sizing of the log files can lead to sluggish query performance and ultimately cause the StealthAUDIT jobs to perform slowly. 
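These attributes can be adjusted with ALTER DATABASE.  The following is a sketch only: the logical file name StealthAUDIT_log is the SQL Server default for a database created as StealthAUDIT (verify yours with sys.database_files), and the 50 GB MAXSIZE is an assumed cap you should adjust to your own disk capacity.

```sql
USE [master]
GO
-- Logical file name and MAXSIZE are assumptions; adjust to your environment
ALTER DATABASE [StealthAUDIT]
    MODIFY FILE (NAME = N'StealthAUDIT_log', SIZE = 2048MB, FILEGROWTH = 200MB, MAXSIZE = 51200MB);
GO
```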

While the settings that I recommend are a good starting point for mid to large StealthAUDIT implementations, periodic monitoring is recommended.  Use DBCC SQLPERF(LOGSPACE) or the sys.dm_db_log_space_usage dynamic management view to monitor log file usage and keep an eye on current log space utilization.  A very high usage watermark means that the log file will keep growing based on the auto-growth setting.  If this trend continues unmitigated, it will lead to a large number of Virtual Log Files (VLFs), and you might have to shrink the log file to reset the VLF count.  Having a large number of VLFs will impact database performance. Therefore, starting with a large file and a large auto-growth setting will minimize the number of VLFs in the long run.  I have seen some DBAs create multiple transaction log files; this does not help improve database performance.  SQL Server writes information to the log files sequentially, never in parallel.  Therefore, there is no real advantage to creating multiple log files other than wasting disk space.  I would rather have one large log file than multiple log files.

USE [StealthAUDIT]
GO
DBCC SQLPERF(LOGSPACE);
GO
SELECT * FROM sys.dm_db_log_space_usage;
GO
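To keep an eye on VLF fragmentation directly, SQL Server 2016 SP2 and later expose the sys.dm_db_log_info dynamic management function; a quick sketch:

```sql
-- Count Virtual Log Files (VLFs) in the StealthAUDIT log (SQL Server 2016 SP2 and later)
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID(N'StealthAUDIT'));
GO
```

A count in the low hundreds is generally manageable; thousands of VLFs suggest the file grew in many small increments and should be re-sized.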


Recovery Model  

A SQL Server recovery model is a property that can be set at the database level and controls how transactions are logged to the transaction log files.  The recovery model will impact SQL Server backup and restore operations while aiding in transaction log maintenance.

SQL Server supports three different recovery models: simple, full, and bulk-logged.  A database can be reconfigured to use any recovery model at any point in time.

  • Simple – In the simple recovery model, the SQL Database Engine does not persist transactions in the transaction log file once they are committed to the data file.  Transactions are periodically flushed from the buffer cache to the transaction log file and then written to the data file.  A checkpoint triggers the process of committing the transactions from the log file to the data file, after which the transaction log file is truncated.  Log file backups are not required in the simple recovery model, which supports full and differential SQL Server backups.  The major disadvantage is that the simple recovery model does not allow recovery of data lost due to a corrupted or lost data file.
  • Full – In the full recovery model, transactions written to the log file persist in the log until the transaction log file is backed up.  The full recovery model is mandatory when setting up SQL Server HA solutions such as Always On Availability Groups, log shipping, and database replication.  It substantially increases backup size, as the transaction logs need to be backed up along with the data files.  The full recovery model allows point-in-time recovery, based on the availability of the transaction log backups, and aids in recovering data lost due to damaged data files.  Log files need to be backed up in the full recovery model.
  • Bulk Logged – In the bulk-logged recovery model, the amount of data written to the log file is reduced by minimally logging most bulk operations.  Log files need to be backed up in the bulk-logged recovery model.  Point-in-time recovery is not supported with this recovery model.  This recovery model is normally recommended while performing bulk load operations, after which the recovery model should be reverted to full.  Changing the recovery model to bulk-logged during bulk load operations will improve the performance of such jobs.

The StealthAUDIT installation guide recommends the simple recovery model for the back-end database.  Since the data stored in the StealthAUDIT database is not considered mission-critical and can easily be re-captured, I recommend using the simple recovery model for the StealthAUDIT database.  If you choose to set the recovery model to full, please be aware that StealthAUDIT job performance will be severely impacted.  Please contact the Stealthbits support organization before changing the recovery model to full or bulk-logged.
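Setting and verifying the recovery model is a one-liner each; a sketch, assuming the database is named StealthAUDIT:

```sql
-- Set the simple recovery model on the StealthAUDIT database and verify it
ALTER DATABASE [StealthAUDIT] SET RECOVERY SIMPLE;
GO
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'StealthAUDIT';
GO
```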

TempDB

TempDB in SQL Server is a system database that is available to all users connected to the instance; it acts as a junk drawer for temporary objects and is considered a global resource. The tempdb data file and log file are created with an initial size of 8 MB and a 64 MB auto-growth setting, with the maximum file size set to unlimited.  Regardless of the recovery model set at the database level, operations within tempdb are minimally logged, while still allowing transaction rollback.  Tempdb is recreated every time the SQL Server instance is restarted.

Most of the Analysis Jobs in StealthAUDIT make heavy use of tempdb: temporary tables, indexes and index rebuilds, aggregate queries (GROUP BY, ORDER BY), joins, and union queries, among other things.  Default or incorrectly sized tempdb settings will severely impact database performance, causing StealthAUDIT jobs to take longer to complete. 

Microsoft recommends that the number of secondary data files in tempdb is based on the number of logical cores available on the server.  If the logical cores are less than eight, then the number of secondary tempdb data files should be equal to that of the number of cores.  If the logical cores allocated to the SQL Server are more than eight, then the number of secondary files should be eight.  Then if contention continues, the number of data files should be increased by multiples of four until the contention drops to an acceptable level.

I recommend pre-allocating large tempdb data and log files to accommodate the typical StealthAUDIT workload.  Having multiple data files provides a high degree of parallel efficiency when StealthAUDIT is using tempdb.  To start with, the number of data files should be at least eight if the server has eight or more logical cores.  Each StealthAUDIT job has a user-configurable thread setting; by default it is set to 10 threads.  If tempdb cannot support that degree of parallelism because it has fewer data files, the thread workload will be serialized. I recommend that you change the thread count to match the logical core count on the StealthAUDIT server, which should be equal to or less than the logical core count on the SQL Server.  The tempdb data files and transaction log files should be sized like the regular database files, with an initial size of 2048 MB, auto-growth set to 200 MB, and a maximum file size limit set.
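As a sketch, the default tempdb files can be re-sized and a secondary data file added with ALTER DATABASE.  The logical names tempdev and templog are the SQL Server defaults; the tempdev2 name and the T:\TempDB\ path are placeholders for your own layout, and the MODIFY FILE statement should be repeated for each existing tempdb data file.

```sql
USE [master]
GO
-- Re-size the default tempdb data and log files (repeat MODIFY FILE per data file)
ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'tempdev', SIZE = 2048MB, FILEGROWTH = 200MB);
ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'templog', SIZE = 2048MB, FILEGROWTH = 200MB);
GO
-- Add a secondary data file; file name and path are placeholders
ALTER DATABASE [tempdb]
    ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 2048MB, FILEGROWTH = 200MB);
GO
```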

Microsoft, while admitting that tempdb has historically been a scalability bottleneck for many workloads, introduced a new feature in SQL Server 2019 called memory-optimized tempdb metadata.  This is part of the in-memory database feature family, and you have to opt in for StealthAUDIT to leverage it. Use the following SQL statements to turn it on and verify the status.  This tempdb configuration change requires a restart of the SQL Server service to take effect.

ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;
GO
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized');
GO

Collation

In SQL Server, the collation setting is responsible for handling sorting rules, case sensitivity, and accent sensitivity.  A collation defines the bit patterns that represent each character and the rules by which characters are sorted and compared.  SQL Server supports a server-level collation and database-level collations.  The server-level collation is chosen during SQL Server setup and is inherited by all the system and user databases.

The database-level collation is used for all the metadata within a given database. If a collation is not specified during database creation, the database inherits the server-level collation.  To complicate things further, SQL Server also supports column-level collations.  For the StealthAUDIT database, both the server and database collation should be set to SQL_Latin1_General_CP1_CI_AS.  If they differ, you might run into implicit data conversion and other errors.  If your server-level collation is something other than SQL_Latin1_General_CP1_CI_AS, please contact Stealthbits support for further assistance.  You can verify the server-level collation using the following T-SQL statement.

SELECT SERVERPROPERTY('collation');
GO

The database-level collation can be verified by reviewing the database properties in SQL Server Management Studio (SSMS).
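The database-level collation can also be checked from T-SQL, which is handy when SSMS is not available; a sketch, assuming the database is named StealthAUDIT:

```sql
-- Return the database-level collation for the StealthAUDIT database
SELECT DATABASEPROPERTYEX(N'StealthAUDIT', 'Collation') AS database_collation;
GO
```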

CPU & Memory Settings

Finally, when it comes to CPU and memory allocation to the StealthAUDIT SQL Server, you should follow the guidelines outlined below:

  • CPU – Regardless of whether you are using a physical machine or a virtual machine to host the StealthAUDIT SQL Server, I recommend setting the number of cores to at least 8 (preferably 16) and conducting some benchmark testing with the most CPU-intensive jobs.  Based on the results, consider increasing the number of physical or virtual cores allocated to the server.  If you do leave the core count at 8, then ensure the thread setting for the StealthAUDIT jobs is also set to 8.  After the StealthAUDIT implementation is complete and the jobs have been running for a couple of days, review the CPU utilization trend on the server hosting the SQL Server database.  Consistent CPU utilization between 80 and 90% indicates the need to add more cores; this applies to both physical and virtual cores.
  • Memory – SQL Server dynamically adjusts its memory utilization based on available system memory.  If SQL Server needs more memory, it queries the operating system to determine the amount of free memory and claims additional memory if available.  The memory is relinquished back to the operating system when it is no longer needed. This behavior can be overridden using the min server memory and max server memory configuration options.  I recommend allocating at least 32 GB of RAM initially and monitoring the SQL Server buffer cache hit ratio, which indicates the percentage of data pages read from the SQL Server cache versus physical disk reads.  Under ideal circumstances, you want to see a cache hit ratio of 90% or higher, meaning that 90% of the time pages were retrieved from memory (logical reads) and only 10% of the pages were retrieved from disk (physical reads).  You can query the buffer cache hit ratio using the following query.
SELECT object_name, counter_name, cntr_value 
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
AND [counter_name] = 'Buffer cache hit ratio'

The SQL Server minimum and maximum memory settings can be changed on the Server Properties pages in SSMS.  I recommend that you do not change these settings and let memory be managed automatically.
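To confirm the current min/max server memory settings without changing anything, the sys.configurations catalog view can be queried; a quick sketch:

```sql
-- Inspect the current min/max server memory settings (read-only)
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN (N'min server memory (MB)', N'max server memory (MB)');
GO
```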

If you run into any issues while configuring SQL Server to host the StealthAUDIT database, please contact our support team.  To learn more about how Stealthbits can help with auditing your IT infrastructure, visit our website: https://stealthbits.com/active-directory-auditing-solution/

© 2022 Stealthbits Technologies, Inc.
