Berkeley DB Java Edition
version 18.1.11
Package com.sleepycat.je

Class EnvironmentStats

getNLNsMigrated() "Number of active LNs that were migrated by logging them."
getNLNsMarked() "Number of active LNs in temporary DBs that were migrated by dirtying them."
getNLNsLocked() "Number of potentially active LNs that were added to the pending queue because they were locked."
getNINsMigrated() "Number of active INs that were migrated by dirtying them."
getNBINDeltasMigrated() "Number of active BIN-deltas that were migrated by dirtying them."

The stats above provide a breakdown of how active entries encountered during cleaning were handled.

When LNs are processed, a queue is used to reduce Btree lookups. LNs are added to the queue when cleaning is needed (they are not known-obsolete). When the queue fills, the oldest LN in the queue is processed. If the LN is found in the Btree, the other LNs in the queue are checked to see if they have the same parent BIN. If so, these LNs can be processed while the BIN is latched, without an additional Btree lookup. The number of such LNs is indicated by the following stat:

getNLNQueueHits() "Number of potentially active LNs that did not require a separate Btree lookup."

The LN queue is most beneficial when LNs are inserted or updated in key order. The maximum size of the queue, expressed as its maximum memory size, can be changed via the EnvironmentConfig.CLEANER_LOOK_AHEAD_CACHE_SIZE param.
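As an illustration, the look-ahead cache size can be raised for an application that inserts or updates in key order. The value below is an example, not a recommendation; the parameter takes a byte size passed as a string.

```java
import com.sleepycat.je.EnvironmentConfig;

public class CleanerLookAheadExample {
    public static void main(String[] args) {
        EnvironmentConfig config = new EnvironmentConfig();
        config.setAllowCreate(true);
        // Raise the cleaner look-ahead cache so that more LNs with the
        // same parent BIN can be processed per Btree lookup.
        config.setConfigParam(
            EnvironmentConfig.CLEANER_LOOK_AHEAD_CACHE_SIZE, "16384");
    }
}
```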

Cleaning Statistics: Pending LNs and DBs

When the cleaner is processing a Btree entry (LN or IN), there are two cases where completion of cleaning (and deletion of the file) must be deferred: an LN cannot be migrated because the record is write-locked, or a DB cannot be processed because its removal or truncation is still incomplete.

If one of these conditions occurs, the LN or DB is added to a pending queue. The cleaner will periodically process the entries in the queue and attempt to resolve them.

When there are no more pending LNs and DBs for a given file then cleaning of the file will be considered complete and it will become a candidate for deletion after the next checkpoint. If a pending entry causes file deletion to be delayed, because the pending entries cannot be resolved before the next checkpoint, a WARNING level message is logged with more information about the pending entries.

The following stats indicate the size of the pending LN queue, how many LNs in the queue have been processed, and of those processed how many remain unresolved because the record is still write-locked.

getPendingLNQueueSize() "Number of LNs pending because they were locked."
getNPendingLNsProcessed() "Number of pending LNs that were re-processed."
getNPendingLNsLocked() "Number of pending LNs that were still locked."

If pending LNs remain unresolved, this could mean an application or JE bug has prevented a write-lock from being released. This could happen, for example, if the application fails to end a transaction or close a cursor. For such bugs, closing and re-opening the Environment is usually needed to allow file deletion to proceed. If this occurs for multiple files and is not resolved, it can eventually lead to an out-of-disk situation.
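As a monitoring sketch (the helper and its policy are hypothetical, not part of the JE API), the stats above can be combined to flag pending LNs that remain write-locked after re-processing:

```java
public class PendingLnCheck {
    /**
     * Returns true when pending LNs have been re-processed but remain
     * write-locked, which may indicate an unreleased transaction or an
     * unclosed cursor. The inputs would come from
     * EnvironmentStats.getNPendingLNsProcessed() and
     * EnvironmentStats.getNPendingLNsLocked().
     */
    public static boolean pendingLnsUnresolved(long processed, long stillLocked) {
        return processed > 0 && stillLocked > 0;
    }

    public static void main(String[] args) {
        // Example values: 10 pending LNs re-processed, 3 still locked.
        System.out.println(pendingLnsUnresolved(10, 3)); // prints "true"
    }
}
```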

The following stats indicate the size of the pending DB queue, how many DBs in the queue have been processed, and of those processed how many remain unresolved because the removal/truncation is still incomplete.

getPendingDBQueueSize() "Number of DBs pending because DB removal/truncation was incomplete."
getNPendingDBsProcessed() "Number of pending DBs that were re-processed."
getNPendingDBsIncomplete() "Number of pending DBs for which DB removal/truncation was still incomplete."

If pending DBs remain unresolved, this may indicate that the asynchronous portion of DB removal/truncation is taking longer than expected. After a DB removal/truncation transaction is committed, JE asynchronously counts the data for the DB as obsolete.

Cleaning Statistics: TTL and expired data

When the TTL feature is used, the obsolete portion of the log includes data that has expired. An expiration histogram is stored for each file and is used to compute the expired size. The current minimum and maximum utilization are the lower and upper bounds of computed utilization. They are different only when the TTL feature is used, and some data in the file has expired while other data has become obsolete for other reasons, such as record updates, record deletions or checkpoints. In this case the strictly obsolete size and the expired size may overlap because they are maintained separately.

If the two sizes overlap completely then the minimum utilization is correct, while if there is no overlap then the maximum utilization is correct. Both utilization values trigger cleaning, but when there is significant overlap, the cleaner will perform two-pass cleaning. The following stats indicate the use of two-pass cleaning:

getNCleanerTwoPassRuns() "Number of cleaner runs that resulted in two-pass runs."
getNCleanerRevisalRuns() "Number of potential cleaner runs that revised expiration info, but did not result in any cleaning."

In the first pass of two-pass cleaning, the file is read to recompute obsolete and expired sizes, but the file is not cleaned. As a result of recomputing the expired sizes, the strictly obsolete and expired sizes will no longer overlap, and the minimum and maximum utilization will be equal. If the file should still be cleaned, based on the recomputed utilization, it is cleaned as usual, and in this case the number of two-pass runs is incremented.

If the file should not be cleaned because its recomputed utilization is higher than expected, the file will not be cleaned. Instead, its recomputed expiration histogram, which now has size information that does not overlap with the strictly obsolete data, is stored for future use. By storing the revised histogram, the cleaner can select the most appropriate files for cleaning in the future. In this case the number of revisal runs is incremented, and the number of total runs is not incremented.

Cleaning Statistics: Disk Space Management

The JE cleaner component is also responsible for checking and enforcing the EnvironmentConfig.MAX_DISK and EnvironmentConfig.FREE_DISK limits, and for protecting cleaned files from deletion while they are in use by replication, backups, etc. This process is described in the EnvironmentConfig.MAX_DISK javadoc. The stats related to disk space management are:

getActiveLogSize() "Bytes used by all active data files: files required for basic JE operation."
getAvailableLogSize() "Bytes available for write operations when unprotected reserved files are deleted: free space + reservedLogSize - protectedLogSize."
getReservedLogSize() "Bytes used by all reserved data files: files that have been cleaned and can be deleted if they are not protected."
getProtectedLogSize() "Bytes used by all protected data files: the subset of reserved files that are temporarily protected and cannot be deleted."
getProtectedLogSizeMap() "A breakdown of protectedLogSize as a map of protecting entity name to protected size in bytes."
getTotalLogSize() "Total bytes used by data files on disk: activeLogSize + reservedLogSize."
getNCleanerDeletions() "Number of cleaner file deletions."

The space taken by all data files, totalLogSize, is divided into categories according to these stats as illustrated below.

     /--------------------------------------------------\
     |                                                  |
     | Active files -- have not been cleaned            |
     |                 and cannot be deleted            |
     |                                                  |
     |             Utilization =                        |
     |    (utilized size) / (total active size)         |
     |                                                  |
     |--------------------------------------------------|
     |                                                  |
     | Reserved files -- have been cleaned and          |
     |                   can be deleted                 |
     |                                                  |
     | /----------------------------------------------\ |
     | |                                              | |
     | | Protected files -- temporarily in use by     | |
     | |                    replication, backups, etc.| |
     | |                                              | |
     | \----------------------------------------------/ |
     |                                                  |
     \--------------------------------------------------/

A key point is that reserved data files will be deleted by JE automatically to prevent violation of a disk limit, as long as the files are not protected. This has two important implications:

We strongly recommend using availableLogSize to monitor disk usage and take corrective action well before this value reaches zero. Monitoring the file system free space is not a substitute for this, since the data files include reserved files that will be deleted by JE automatically.

Applications should normally define a threshold for availableLogSize and raise an alert of some kind when the threshold is reached. When this happens applications may wish to free space (by deleting records, for example) or expand storage capacity. If JE write operations are needed as part of this procedure, corrective action must be taken while there is still enough space available to perform the write operations.
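A minimal monitoring sketch along these lines; the threshold value and the alert action are hypothetical, chosen only for illustration:

```java
public class DiskSpaceMonitor {
    // Hypothetical threshold: alert when less than 10 GB is available.
    static final long AVAILABLE_LOG_THRESHOLD = 10L * 1024 * 1024 * 1024;

    /** Returns true when corrective action should be triggered. */
    public static boolean belowThreshold(long availableLogSize) {
        return availableLogSize < AVAILABLE_LOG_THRESHOLD;
    }

    public static void main(String[] args) {
        // In a real application the value would come from:
        //   EnvironmentStats stats = env.getStats(null);
        //   long available = stats.getAvailableLogSize();
        long available = 5L * 1024 * 1024 * 1024; // 5 GB, for illustration
        if (belowThreshold(available)) {
            System.out.println("ALERT: availableLogSize is low: " + available);
        }
    }
}
```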

For example, to free space by deleting records requires enough space to log the deletions, and enough temporary space for the cleaner to reclaim space for the deleted records. As described in the sections above, the cleaner uses more disk space temporarily in order to migrate LNs, and a checkpoint must be performed before deleting the cleaned files.

How much available space is needed is application specific and testing may be required to determine the application's availableLogSize threshold. Note that the default EnvironmentConfig.FREE_DISK value, five GB, may or may not be large enough to perform the application's recovery procedure. The default FREE_DISK limit is intended to reserve space for recovery when application monitoring of availableLogSize fails and emergency measures must be taken.

If availableLogSize is unexpectedly low, it is possible that protected files are preventing space from being reclaimed. This could be due to replication, backups, etc. See getReservedLogSize() and getProtectedLogSizeMap() for more information.

It is also possible that data files cannot be deleted due to read-only processes. When one process opens a JE environment in read-write mode and one or more additional processes open the environment in read-only mode, the read-only processes will prevent the read-write process from deleting data files. For this reason, long running read-only processes are strongly discouraged in a production environment. When data file deletion is prevented for this reason, a SEVERE level message is logged with more information.

I/O Statistics

Group Name: "I/O"
Description: "The file I/O component of the append-only storage system includes data file access, buffering and group commit."

I/O Statistics: File Access

JE accesses data files (.jdb files) via Java's standard file system APIs. Because opening a file is relatively expensive, an LRU-based cache of open file handles is maintained. The stats below indicate how many cached file handles are currently open and how many open file operations have taken place.

getNOpenFiles() "Number of files currently open in the file cache."
getNFileOpens() "Number of times a log file has been opened."

To prevent expensive file open operations during record read operations, set EnvironmentConfig.LOG_FILE_CACHE_SIZE to the maximum number of data files expected in the Environment.
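For example, assuming an environment expected to grow to roughly 2000 data files (the value is illustrative):

```java
import com.sleepycat.je.EnvironmentConfig;

public class FileCacheSizing {
    public static void main(String[] args) {
        EnvironmentConfig config = new EnvironmentConfig();
        // Size the open-file-handle cache to cover all expected .jdb
        // files, so record reads do not trigger file open operations.
        config.setConfigParam(EnvironmentConfig.LOG_FILE_CACHE_SIZE, "2000");
    }
}
```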

Note that JE may open the same file more than once. If a read operation in one thread is accessing a file via its cached handle and another thread attempts to read from the same file, a temporary handle is opened just for the duration of the read. The getNFileOpens() stat includes open operations for both cached file handles and temporary file handles. Therefore, this stat cannot be used to determine whether the file cache is too small.

When a file read is performed, it is always possible for the read buffer size to be smaller than the log entry being read. This is because JE's append-only log contains variable sized entries rather than pages. If the read buffer is too small to contain the entire entry, a repeat read with a larger buffer must be performed. These additional reads can be reduced by monitoring the following two stats and increasing the read buffer size as described below.

When Btree nodes are read at known file locations (by user API operations, for example), the following stat indicates the number of repeat reads:

getNRepeatFaultReads() "Number of times a log entry size exceeded the log fault read size."

When the number of getNRepeatFaultReads() is significant, consider increasing EnvironmentConfig.LOG_FAULT_READ_SIZE.

When data files are read sequentially (by the cleaner, for example) the following stat indicates the number of repeat reads:

getNRepeatIteratorReads() "Number of times a log entry size exceeded the log iterator max size."

When the number of getNRepeatIteratorReads() is significant, consider increasing EnvironmentConfig.LOG_ITERATOR_MAX_SIZE.
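Both read buffer sizes are set via configuration parameters; the values below are illustrative only, not recommendations:

```java
import com.sleepycat.je.EnvironmentConfig;

public class ReadBufferTuning {
    public static void main(String[] args) {
        EnvironmentConfig config = new EnvironmentConfig();
        // Larger fault read size reduces getNRepeatFaultReads() for
        // Btree nodes read at known file locations.
        config.setConfigParam(EnvironmentConfig.LOG_FAULT_READ_SIZE, "4096");
        // Larger iterator max size reduces getNRepeatIteratorReads()
        // for sequential file readers such as the cleaner.
        config.setConfigParam(
            EnvironmentConfig.LOG_ITERATOR_MAX_SIZE, "33554432");
    }
}
```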

The two groups of stats below indicate JE file system reads and writes as number of operations and number of bytes. These stats are roughly divided into random and sequential operations by assuming that storage devices can optimize for sequential access if two consecutive operations are performed one MB or less apart in the same file. This categorization is approximate and may differ from the actual number depending on the type of disks and file system, disk geometry, and file system cache size.

The JE file read and write stats can sometimes be useful for debugging or for getting a rough idea of I/O characteristics. However, monitoring of system level I/O stats (e.g., using iostat) gives a more accurate picture of actual I/O since access via the buffer cache is not included. In addition the JE stats are not broken out by operation type and therefore don't add a lot of useful information to the system level I/O stats, other than the rough division of random and sequential I/O.

The JE file read stats are:

getNRandomReads() "Number of disk reads which required a seek of more than 1MB from the previous file position or were read from a different file."
getNRandomReadBytes() "Number of bytes read which required a seek of more than 1MB from the previous file position or were read from a different file."
getNSequentialReads() "Number of disk reads which did not require a seek of more than 1MB from the previous file position and were read from the same file."
getNSequentialReadBytes() "Number of bytes read which did not require a seek of more than 1MB from the previous file position and were read from the same file."

JE file read stats include file access resulting from internal operations (such as cleaning and eviction) as well as user operations. Because internal operations are included, it is not practical to correlate these stats directly to user operations.

The JE file write stats are:

getNRandomWrites() "Number of disk writes which required a seek of more than 1MB from the previous file position or were written to a different file."
getNRandomWriteBytes() "Number of bytes written which required a seek of more than 1MB from the previous file position or were written to a different file."
getNSequentialWrites() "Number of disk writes which did not require a seek of more than 1MB from the previous file position and were written to the same file."
getNSequentialWriteBytes() "Number of bytes written which did not require a seek of more than 1MB from the previous file position and were written to the same file."

JE file write stats include file access resulting from internal operations (such as checkpointing and cleaning) as well as user operations. As with the read stats, because internal operations are included it is not practical to correlate the write stats directly to user operations.

I/O Statistics: Logging Critical Section

JE uses an append-only storage system where each log entry is assigned an LSN (log sequence number). The LSN is a 64-bit integer consisting of two 32-bit parts: the file number is the high order 32-bits and the file offset is the low order 32-bits.
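The two parts can be recovered with plain bit operations; this self-contained sketch simply follows the format described above:

```java
public class LsnParts {
    /** High order 32 bits: the file number. */
    public static long fileNumber(long lsn) {
        return lsn >>> 32;
    }

    /** Low order 32 bits: the offset within the file. */
    public static long fileOffset(long lsn) {
        return lsn & 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        long lsn = (3L << 32) | 420L; // file 3, offset 420
        System.out.println(fileNumber(lsn)); // prints 3
        System.out.println(fileOffset(lsn)); // prints 420
    }
}
```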

LSNs are used in the Btree to reference child nodes from their parent node. Therefore a node's LSN is assigned when the node is written, including the case where the write is buffered. The next LSN to be assigned is indicated by the following stat:

getEndOfLog() "The LSN of the next entry to be written to the log."

LSN assignment and assignment of log buffer space must be performed serially, and therefore these operations occur in a logging critical section. In general JE strives to do as little additional work as possible in the logging critical section. However, in certain cases additional operations are performed in the critical section and these generally impact performance negatively. These special cases will be noted in the sections that follow.

I/O Statistics: Log Buffers

A set of JE log buffers is used to buffer writes. When write operations use Durability.SyncPolicy.NO_SYNC, a file write is not performed until a log buffer is filled. This positively impacts performance by reducing the number of file writes. Note that checkpoint writes use NO_SYNC, so this benefits performance even when user operations do not use NO_SYNC.

(When Durability.SyncPolicy.SYNC or Durability.SyncPolicy.WRITE_NO_SYNC is used, the required file write and fsync are performed using a group commit mechanism, which is described further below.)

The size and number of log buffers is configured using EnvironmentConfig.LOG_BUFFER_SIZE, EnvironmentConfig.LOG_NUM_BUFFERS and EnvironmentConfig.LOG_TOTAL_BUFFER_BYTES. The resulting total size and number of buffers is indicated by the following stats:

getBufferBytes() "Total memory currently consumed by all log buffers, in bytes."
getNLogBuffers() "Number of log buffers."
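For example, the buffer configuration could be set explicitly as follows; the values are illustrative only, and the defaults are appropriate for most applications:

```java
import com.sleepycat.je.EnvironmentConfig;

public class LogBufferTuning {
    public static void main(String[] args) {
        EnvironmentConfig config = new EnvironmentConfig();
        // 16 log buffers of one MB each (sizes are byte counts
        // passed as strings per the setConfigParam contract).
        config.setConfigParam(EnvironmentConfig.LOG_NUM_BUFFERS, "16");
        config.setConfigParam(EnvironmentConfig.LOG_BUFFER_SIZE, "1048576");
    }
}
```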

The default buffer size (one MB) is expected to be optimal for most applications. In NoSQL DB, the default buffer size is used. However, if an individual entry (e.g., a BIN or LN) is larger than the buffer size, the log buffer mechanism is bypassed and this can negatively impact performance. When writing such an entry, the write occurs in the critical section using a temporary buffer, and any dirty log buffers are also written in the critical section. When this occurs it is indicated by the following stat:

getNTempBufferWrites() "Number of writes for entries larger than the log buffer size, forcing a write in the critical section."

When getNTempBufferWrites() is consistently non-zero, consider increasing the log buffer size.

The number of buffers also impacts write performance when many threads are performing write operations. The use of multiple buffers allows one writing thread to flush the completed dirty buffers while other writing threads add entries to "clean" buffers (that have already been written).

If many threads are adding to clean buffers while the completed dirty buffers are being written, it is possible that no more clean buffers will be available for adding entries. When this happens, the dirty buffers are flushed in the critical section, which can negatively impact performance. This is indicated by the following stat:

getNNoFreeBuffer() "Number of writes that could not obtain a free buffer, forcing a write in the critical section."

When getNNoFreeBuffer() is consistently non-zero, consider increasing the number of log buffers.

The number of log buffers also impacts read performance. JE read operations use the log buffers to read entries that were recently written. This occurs infrequently in the case of user read operations via the JE APIs, since recently written data is infrequently read and is often resident in the cache. However, it does occur frequently, and is an important factor, for internal operations such as replication, where recently written entries are read in order to send them to replicas.

Because of the replication case, in NoSQL DB the number of log buffers is set to 16. In general we recommend configuring 16 buffers or more for a replicated environment.

The following stats indicate the number of requests to read log entries by LSN, and the number that were not found in the log buffers.

getNNotResident() "Number of requests to read log entries by LSN."
getNCacheMiss() "Number of requests to read log entries by LSN that were not present in the log buffers."

In general these two stats are used only for internal JE debugging and are not useful to the application. This is because getNNotResident() is roughly the sum of the VLSNIndex nMisses replication stat and the cache fetch miss stats: getNLNsFetchMiss(), getNBINsFetchMiss(), getNFullBINsMiss() and getNUpperINsFetchMiss().

I/O Statistics: The Write Queue

JE performs special locking to prevent an fsync and a file write from executing concurrently.

The write queue is a single, low-level buffer that reduces blocking due to a concurrent fsync and file write request. When a write of a dirty log buffer is needed to free a log buffer for a Durability.SyncPolicy.NO_SYNC operation (i.e., durability is not required), the write queue is used to hold the data temporarily and allow a log buffer to be freed.

Use of the write queue is strongly recommended since there is no known drawback to using it. It is enabled by default and in NoSQL DB. However, it can be disabled if desired by setting EnvironmentConfig.LOG_USE_WRITE_QUEUE to false.

The following stats indicate use of the write queue for satisfying file write and read requests. Note that when the write queue is enabled, all file read requests must check the write queue to avoid returning stale data.

getNWritesFromWriteQueue() "Number of file write operations executed from the pending write queue."
getNBytesWrittenFromWriteQueue() "Number of bytes written from the pending write queue."
getNReadsFromWriteQueue() "Number of file read operations which were fulfilled by reading out of the pending write queue."
getNBytesReadFromWriteQueue() "Number of bytes read to fulfill file read operations by reading out of the pending write queue."

The default size of the write queue (one MB) is expected to be adequate for most applications. Note that the write queue size should never be smaller than the log buffer size (which is also one MB by default). In NoSQL DB, the default sizes for the write queue and the log buffer are used.

However, when many NO_SYNC writes are requested during an fsync, some write requests may have to block until the fsync is complete. This is indicated by the following stats:

getNWriteQueueOverflow() "Number of write operations which would overflow the write queue."
getNWriteQueueOverflowFailures() "Number of write operations which would overflow the write queue and could not be queued."

When a NO_SYNC write request occurs during an fsync and the size of the write request's data is larger than the free space in the write queue, the getNWriteQueueOverflow() stat is incremented. When this stat is consistently non-zero, consider increasing the size of the write queue via EnvironmentConfig.LOG_WRITE_QUEUE_SIZE.

When such a write queue overflow occurs, JE will wait for the fsync to complete, empty the write queue by writing it to the file, and attempt again to add the data to the write queue. If this fails again because there is still not enough free space in the write queue, then the getNWriteQueueOverflowFailures() stat is incremented. In this case the data is written to the file rather than adding it to the write queue, even though this may require waiting for an fsync to complete.

If getNWriteQueueOverflowFailures() is consistently non-zero, the same remedy applies: consider increasing the size of the write queue.

I/O Statistics: Fsync and Group Commit

When Durability.SyncPolicy.SYNC or Durability.SyncPolicy.WRITE_NO_SYNC is used for transactional write operations, the required file write and fsync are performed using a group commit mechanism. In the presence of concurrent transactions, this mechanism often allows performing a single write and fsync for multiple transactions, while still ensuring that the write and fsync are performed before the transaction commit() method (or the put() or delete() operation method in the case of auto-commit) returns successfully.
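For illustration, a sketch of a transaction that takes the group commit path with SYNC durability; environment and database setup are omitted:

```java
import com.sleepycat.je.Durability;
import com.sleepycat.je.Environment;
import com.sleepycat.je.Transaction;
import com.sleepycat.je.TransactionConfig;

public class SyncCommitSketch {
    static void commitWithSync(Environment env) {
        TransactionConfig txnConfig = new TransactionConfig();
        // COMMIT_SYNC: commit() returns only after the commit record has
        // been written and fsynced, using the group commit mechanism.
        txnConfig.setDurability(Durability.COMMIT_SYNC);
        Transaction txn = env.beginTransaction(null, txnConfig);
        // ... perform put/delete operations with txn ...
        txn.commit();
    }
}
```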

First note that not all file write and fsync operations are due to user transaction commits, and not all fsyncs use the group commit mechanism.

The following stats describe all fsyncs performed by JE, whether or not the group commit mechanism is used.

getNLogFSyncs() "Number of fsyncs of the JE log."
getFSyncTime() "Number of milliseconds used to perform fsyncs."
getFSyncMaxTime() "Maximum number of milliseconds used to perform a single fsync."

Long fsync times often result in long transaction latencies. When this is indicated by the above stats, ensure that the Linux page cache has been tuned to permit the OS to write asynchronously to disk whenever possible. For the NoSQL DB product this is described under Linux Page Cache Tuning. To aid in diagnosing long fsyncs, a WARNING level message is logged when the maximum fsync time exceeds EnvironmentConfig.LOG_FSYNC_TIME_LIMIT.

The following stats indicate when group commit is requested for a write operation. Group commit requests include all user transactions with SYNC or WRITE_NO_SYNC durability, as well as the internal JE write operations that use group commit.

getNGroupCommitRequests() "Number of group commit requests."
getNFSyncRequests() "Number of group commit requests that include an fsync request."

All group commit requests result in a group commit operation that flushes all dirty log buffers and the write queue using a file write. In addition, requests using SYNC durability will cause the group commit operation to include an fsync.

Because group commit operations are performed serially, while a group commit is executing in one thread, one or more other threads may be waiting to perform a group commit. The group commit mechanism works by forming a group containing the waiting threads. When the prior group commit is finished, a single group commit is performed on behalf of the new group in one of this group's threads, which is called the leader. The other threads in the group are called waiters and they proceed only after the leader has finished the group commit.

If a waiter thread waits longer than EnvironmentConfig.LOG_FSYNC_TIMEOUT for the leader to finish the group commit operation, the waiter will remove itself from the group and perform a group commit operation independently. The number of such timeouts is indicated by the following stat:

getNFSyncTimeouts() "Number of group commit waiter threads that timed out."

The timeout is intended to prevent waiter threads from waiting indefinitely due to an unexpected problem. If getNFSyncTimeouts() is consistently non-zero and the application is performing normally in other respects, consider increasing EnvironmentConfig.LOG_FSYNC_TIMEOUT.

The following stat indicates the number of group commit operations that included an fsync. There is currently no stat available indicating the number of group commit operations that did not include an fsync.

getNFSyncs() "Number of group commit fsyncs completed."

Note that getNFSyncs() is a subset of the getNLogFSyncs() total that is described further above.
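The fsyncs performed outside of group commit can therefore be estimated by subtraction; a small sketch, with the two inputs standing in for getNLogFSyncs() and getNFSyncs():

```java
public class FsyncBreakdown {
    /**
     * Fsyncs not attributable to group commit, derived from the total
     * (getNLogFSyncs()) and the group commit subset (getNFSyncs()).
     */
    public static long nonGroupCommitFSyncs(long nLogFSyncs, long nFSyncs) {
        return nLogFSyncs - nFSyncs;
    }

    public static void main(String[] args) {
        System.out.println(nonGroupCommitFSyncs(120L, 100L)); // prints 20
    }
}
```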

Node Compression Statistics

Group Name: "Node Compression"
Description: "Deleted records are removed from Btree internal nodes asynchronously and nodes are deleted when they become empty."

The following statistics are available. More information will be provided in a future release.

getSplitBins() "Number of BINs encountered by the INCompressor that were split between the time they were put on the compressor queue and when the compressor ran."
getDbClosedBins() "Number of BINs encountered by the INCompressor that had their database closed between the time they were put on the compressor queue and when the compressor ran."
getCursorsBins() "Number of BINs encountered by the INCompressor that had cursors referring to them when the compressor ran."
getNonEmptyBins() "Number of BINs encountered by the INCompressor that were not actually empty when the compressor ran."
getProcessedBins() "Number of BINs that were successfully processed by the INCompressor."
getInCompQueueSize() "Number of entries in the INCompressor queue."

Checkpoint Statistics

Group Name: "Checkpoints"
Description: "Dirty Btree internal nodes are written to the data log periodically to bound recovery time."

The following statistics are available. More information will be provided in a future release.

getNCheckpoints() "Number of checkpoints performed."
getLastCheckpointInterval() "Byte length from last checkpoint start to the previous checkpoint start."
getNFullINFlush() "Number of full INs flushed to the log."
getNFullBINFlush() "Number of full BINs flushed to the log."
getNDeltaINFlush() "Number of BIN-deltas flushed to the log."
getLastCheckpointId() "Id of the last checkpoint."
getLastCheckpointStart() "Location in the log of the last checkpoint start."
getLastCheckpointEnd() "Location in the log of the last checkpoint end."

Lock Statistics

Group Name: "Locks"
Description: "Record locking is used to provide transactional capabilities."

The following statistics are available. More information will be provided in a future release.

getNReadLocks() "Number of read locks currently held."
getNWriteLocks() "Number of write locks currently held."
getNOwners() "Number of lock owners in lock table."
getNRequests() "Number of times a lock request was made."
getNTotalLocks() "Number of locks currently held."
getNWaits() "Number of times a lock request blocked."
getNWaiters() "Number of threads waiting for a lock."

Operation Throughput Statistics

Group Name: "Op"
Description: "Throughput statistics for JE calls."

The following statistics are available. More information will be provided in a future release.

getPriSearchOps() "Number of successful primary DB key search operations."
getPriSearchFailOps() "Number of failed primary DB key search operations."
getSecSearchOps() "Number of successful secondary DB key search operations."
getSecSearchFailOps() "Number of failed secondary DB key search operations."
getPriPositionOps() "Number of successful primary DB position operations."
getSecPositionOps() "Number of successful secondary DB position operations."
getPriInsertOps() "Number of successful primary DB insertion operations."
getPriInsertFailOps() "Number of failed primary DB insertion operations."
getSecInsertOps() "Number of successful secondary DB insertion operations."
getPriUpdateOps() "Number of successful primary DB update operations."
getSecUpdateOps() "Number of successful secondary DB update operations."
getPriDeleteOps() "Number of successful primary DB deletion operations."
getPriDeleteFailOps() "Number of failed primary DB deletion operations."
getSecDeleteOps() "Number of successful secondary DB deletion operations."

Btree Operation Statistics

Group Name: "BtreeOp"
Description: "Btree internal operation statistics."

The following statistics are available. More information will be provided in a future release.

getRelatchesRequired() "Number of btree latch upgrades required while operating on this Environment. A measurement of contention."
getRootSplits() "Number of times a database btree root was split."
getNBinDeltaGetOps() "Number of gets performed in BIN-deltas."
getNBinDeltaInsertOps() "Number of insertions performed in BIN-deltas."
getNBinDeltaUpdateOps() "Number of updates performed in BIN-deltas."
getNBinDeltaDeleteOps() "Number of deletions performed in BIN-deltas."

Miscellaneous Environment-Wide Statistics

Group Name: "Environment"
Description: "Miscellaneous environment wide statistics."

The following statistics are available. More information will be provided in a future release.

getEnvironmentCreationTime() "System time when the Environment was opened."

See Also:
Viewing Statistics with JConsole, Serialized Form

Copyright (c) 2002, 2017 Oracle and/or its affiliates. All rights reserved.