MySQL Error 23: Out of File Resources
Error Code: 23
Description
Error 23, 'Out of resources when opening file', indicates that the MySQL server or the underlying operating system has exhausted its available file descriptors or other file-related resources. This typically occurs when MySQL attempts to open a new file (e.g., a table, log, or temporary file) but is prevented by system or server-specific limits.
Error Message
Out of resources when opening file '%s' (OS errno %d - %s)
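Before changing anything, it helps to see how close the server is running to its configured limit. A quick check from the MySQL client compares the configured limit with the number of files currently open:
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL STATUS LIKE 'Open_files';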
Known Causes
Operating System File Limits
The underlying operating system has reached its maximum allowed number of open file descriptors for the MySQL process or the entire system.
MySQL `open_files_limit` Too Low
The MySQL server's configured `open_files_limit` variable is insufficient for the current number of tables, logs, or temporary files it needs to access.
High Number of Tables/Connections
A large number of tables, partitions, or active client connections can quickly consume available file descriptors, leading to resource exhaustion. The queries after this list show how to check the relevant settings.
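A rough check is to compare `open_files_limit` against the table cache and connection settings; exact descriptor usage per table and per connection varies by storage engine, so treat this as a diagnostic sketch rather than a formula:
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL VARIABLES LIKE 'max_connections';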
Solutions
1. Increase Open File Limit for MySQL Process (difficulty: medium)
Temporarily or permanently increase the maximum number of open files allowed for the MySQL server process.
1
Identify the MySQL user. This is usually 'mysql' or the user under which the MySQL daemon runs.
2
Check the current open file limit for the MySQL user. `ulimit -n` reports the limit for the current shell, so run it as the MySQL user; because that account usually has no login shell, pass one explicitly. Note that the limits of an already-running service are fixed at startup, so the system and service configuration (see the following steps) are what ultimately matter.
sudo su -s /bin/bash mysql -c 'ulimit -n'
3
Edit the system's limits configuration file. The exact file can vary by distribution (e.g., `/etc/security/limits.conf` or files within `/etc/security/limits.d/`). Add or modify lines to set a higher limit.
sudo nano /etc/security/limits.conf
4
Add or modify these lines, replacing `mysql` with your MySQL user if different; 65536 is a common choice, but pick a value suited to your table count and connection load.
mysql soft nofile 65536
mysql hard nofile 65536
5
If you are using a systemd-based system, you might also need to configure the limit within the MySQL service unit file.
sudo systemctl edit mysql.service
6
Add the following to the `[Service]` section; a single value sets both the soft and hard limit. This is a more targeted approach for systemd.
[Service]
LimitNOFILE=65536
7
Reload the systemd daemon and restart the MySQL service for the changes to take effect.
sudo systemctl daemon-reload
sudo systemctl restart mysql
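After the restart, verify from within MySQL that the higher limit was actually picked up; the server cannot exceed what the operating system grants it, and an explicit `open_files_limit` setting under `[mysqld]` in the configuration file would also need to be raised:
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';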
2. Reduce the Number of Open Files (difficulty: medium)
Identify and close unnecessary open files or connections that are consuming file handles.
1
Check for processes or connections that might be holding many file handles open. This could include open tables, temporary files, log files, or client connections.
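A quick snapshot of what the server currently holds open, and how many client threads are connected, is available from the status counters:
SHOW GLOBAL STATUS LIKE 'Open%';
SHOW GLOBAL STATUS LIKE 'Threads_connected';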
2
Use `SHOW OPEN TABLES;` in MySQL to see which non-temporary tables the server currently has open in its table cache.
SHOW OPEN TABLES;
3
Examine the output. An unexpectedly large number of open tables may indicate that the table cache is sized larger than the file-handle budget allows. For a per-file view (log files, temporary files, and InnoDB `.ibd` files), query the Performance Schema as shown below; if many log files are open, ensure log rotation is functioning correctly.
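The Performance Schema tracks every file instance the server has opened. A sketch of a breakdown query, assuming the Performance Schema is enabled (it is by default in recent versions):
SELECT EVENT_NAME, COUNT(*) AS files, SUM(OPEN_COUNT) AS open_handles
FROM performance_schema.file_instances
GROUP BY EVENT_NAME
ORDER BY open_handles DESC;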
4
On Linux, you can use `lsof` to count the open files of the MySQL server process; `pidof mysqld` resolves its process ID (substitute the PID directly if your binary has a different name).
sudo lsof -p "$(pidof mysqld)" | wc -l
5
If specific client connections are consuming excessive resources, consider optimizing queries or disconnecting idle clients.
KILL connection_id;
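To spot idle connections worth reviewing before killing anything, the process list can be filtered by command and idle time (a sketch; the 300-second threshold is arbitrary):
SELECT ID, USER, HOST, DB, TIME
FROM information_schema.PROCESSLIST
WHERE COMMAND = 'Sleep' AND TIME > 300
ORDER BY TIME DESC;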
6
Ensure that any temporary tables or files generated by MySQL are being properly managed and cleaned up. This is usually handled automatically, but persistent buildup may point to a configuration problem or a server bug.
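The status counters give a quick read on how heavily the server relies on temporary tables and files; persistently high `Created_tmp_disk_tables` or `Created_tmp_files` values suggest queries that spill to disk:
SHOW GLOBAL STATUS LIKE 'Created_tmp%';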
3. Restart MySQL Server (difficulty: easy)
A quick restart can often clear temporary file handle issues.
1
Restart the MySQL service. This is a temporary solution that can help if the issue is due to a resource leak or a temporary condition.
sudo systemctl restart mysql
2
Alternatively, for older systems or different init systems, use the appropriate command.
sudo service mysql restart