The most recent version is 1.6.4.
The minimum recommended version is 1.6.3.
The minimum supported version is 1.6.2.
Clients running a recent version (1.6.3 or newer) have a 50% bonus to range priority promotions.
1.6.4 - 2024-12-13
This is an optional update. It includes improvements that remove unused files from old caches, prevent corrupted files from being served, and reduce the latency of connecting to the client, but it has no important bug fixes or required compatibility additions.
- If the filesize of a cached file does not match the expected size, we now ignore it and perform a backend fetch instead.
- To prevent long-term bitrot, we now occasionally verify the integrity of requested files as they are being served. This will check a particular file no more often than once per week, and it will check no more than one file every two seconds. This should cause no additional I/O or RAM usage, and should have negligible impact on CPU usage. (A rough sketch of this throttling follows at the end of the 1.6.4 notes.)
- CPU-starved clients can disable this verification checking by starting the client with --disable-file-verification. Note however that if the monitoring system detects corrupted files in your cache, your client will be flagged for a full cache verification on next startup, which can take a long time, so it is recommended to leave it enabled unless the client is actually CPU-starved.
- Partially because of the new file integrity checking, the LRU cache table is now created even if --use-less-memory is used. This will increase the memory requirements in this mode by about 2 MB, but reduce disk I/O.
- Fixed an issue where if a directory chosen for cache pruning did not exist or was inaccessible (due to a file system or permission issue), the pruning mechanism would loop trying to prune said directory.
- If the cached number of static ranges is higher than the number of static ranges returned by the server during startup, we now force a cache rescan to prevent files in removed ranges from clogging up the cache.
- If a static range was removed, the range directory is now deleted on the first cache rescan. Previously it would delete the files, but leave the directory until the next rescan.
- Re-enabled TLS 1.3, which among other things reduces the latency of establishing an HTTPS connection to the client. It was originally disabled due to a significantly higher failure rate compared to TLS 1.2, caused by broken proxies, firewalls and other network filtering devices, but since TLS 1.3 usage is widespread at this point, this should no longer be a problem.
- TLS 1.0 and 1.1 were disabled as they are deprecated and insecure, with support being dropped from modern operating systems (see techcommunity.microsoft.com). Everything that supports the root cert of the current certificate authority more than likely supports TLS 1.2.
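
As a rough illustration of the verification throttling described above, the sketch below shows one way the two limits (no more than once per week per file, no more than one file every two seconds) could be combined. The class and method names are hypothetical; this is not the actual H@H code.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch only; not the actual H@H implementation.
    class VerificationThrottle {
        private static final long ONE_WEEK_MS = 7L * 24 * 60 * 60 * 1000;
        private static final long GLOBAL_INTERVAL_MS = 2_000; // at most one file every two seconds

        private final Map<String, Long> lastCheckedPerFile = new HashMap<>();
        private long lastGlobalCheck = 0;

        // Decide whether the file currently being served should have its integrity verified.
        synchronized boolean shouldVerify(String fileId) {
            long now = System.currentTimeMillis();
            if (now - lastGlobalCheck < GLOBAL_INTERVAL_MS) {
                return false; // global limit: no more than one check every two seconds
            }
            long lastForFile = lastCheckedPerFile.getOrDefault(fileId, 0L);
            if (now - lastForFile < ONE_WEEK_MS) {
                return false; // this particular file was already checked within the last week
            }
            lastGlobalCheck = now;
            lastCheckedPerFile.put(fileId, now);
            return true;
        }
    }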
1.6.3 - 2024-05-12
This is a recommended update. It may become a required update in the future.
- Added experimental proxy support for backend image server requests. This allows you to use a SOCKS (v4 or v5) or HTTP proxy if connectivity to the image servers is unreliable, which is mostly relevant in regions with heavy internet censorship.
This adds three arguments that can be passed on startup (an example invocation is shown after the 1.6.3 notes):
--image-proxy-host=<host> - hostname or IP address for the proxy
--image-proxy-type=<type> - can be "socks" or "http". Defaults to "socks" if not provided.
--image-proxy-port=<port> - the port of the proxy. Defaults to 1080 for SOCKS and 8080 for HTTP if not provided.
It does not support proxies that require authentication.
While it will technically work to use Tor as a SOCKS proxy, this should be avoided as Tor is too slow for this purpose.
- The currently selected RPC server will no longer reset when the server list is refreshed, as long as the current server is still in the list. This should make the --rpc-server-ip argument more useful.
- Improved reliability of reading HTTP request headers with some browser/locale combinations.
- Corrected a potential resource exhaustion issue.
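
As an example of the proxy arguments added in 1.6.3, a client routing its backend image requests through a local SOCKS5 proxy on the default port could be started roughly like this (the jar filename is assumed here and may differ from your installation):

    java -jar HentaiAtHome.jar --image-proxy-host=127.0.0.1 --image-proxy-type=socks --image-proxy-port=1080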
1.6.2 - 2023-09-14
This is now a required update due to lack of WebP support on previous versions.
- Fixed an issue on some setups where, when running a test against other clients, two competing threads could reach a lock in an unexpected order, which would make the client report a failure before it actually ran the test.
- Fixed an issue where, if the cache was moved without preserving file modification dates and the cache is full, the cache pruner could get stuck in a loop of not finding any files to prune.
- In very rare cases, when loading persistent cache data on startup, the internal Java object deserializer could get stuck in an infinite loop if the files had been corrupted by some software or hardware issue, which required manually deleting them. We now automatically delete those files on the next startup if this happens.
- Added a way for the server to tell a client to shut down in case of persistent network configuration issues.
- Added some MIME types for possible future use.
1.6.1 - 2020-08-12
- A sanity check for certificate expiry has been added. If a client does not successfully refresh the certificate for whatever reason, it will now shut down gracefully 24 hours before the certificate actually expires instead of failing silently.
- If the system time is off by more than 24 hours, a warning advising you to correct it will now be printed regularly. Failure to do so might make the certificate check trigger prematurely or fail to trigger at all.
- During proxy requests, when a file is requested but not found in cache, the backend will now provide an alternative source link in addition to the primary one. If the client is unable to connect to the primary source, it will automatically fall back to the secondary one.
- H@H now makes a best-effort attempt to include filesystem overhead for slack space in its cache size calculations. By default this calculation assumes a filesystem block size of 4kB which is by far the most common, adding an estimated overhead of 2 GB per 1 million files - roughly 0.5% for your average H@H cache.
Note that if your cache was at ~100%, this means it will be slightly over and will prune this amount from the cache at the next startup. If you already left some extra space, you can increase the H@H cache size by this amount BEFORE starting up to compensate. If you abort startup after seeing the pruning message, it will have to rescan the cache at the next startup. This does not reduce the amount of static ranges you can store at a given setting; it just adds 2kB (or blocksize/2) average overhead per file for the internal resource tracking.
You can specify a different blocksize with --filesystem-blocksize=xxx, or turn this off entirely with --filesystem-blocksize=0.
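
As a back-of-the-envelope illustration of the slack space estimate described above: with the default 4kB block size, each cached file is assumed to waste half a block (2kB) on average, so a million files add roughly 2 GB. The sketch below uses hypothetical names and is not the actual H@H code.

    // Illustrative sketch only; not the actual H@H implementation.
    class SlackSpaceEstimate {
        // Assumed average per-file overhead: half the filesystem block size.
        static long estimatedOverheadBytes(long fileCount, long blockSizeBytes) {
            if (blockSizeBytes <= 0) {
                return 0; // corresponds to --filesystem-blocksize=0, which disables the adjustment
            }
            return fileCount * (blockSizeBytes / 2);
        }

        public static void main(String[] args) {
            // 1,000,000 files with a 4096-byte block size: 2,048,000,000 bytes, roughly 2 GB
            System.out.println(estimatedOverheadBytes(1_000_000, 4096));
        }
    }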
1.6.0 - 2020-01-02
Compared to 1.4.2, this incorporates all changes in the 1.5 experimental branch. You can find those release notes here.
- When refreshing HTTPS certs, the client will now wait longer (up to five minutes) before it attempts starting the server back up if the listening thread takes unexpectedly long to terminate.
- When handling file requests, cache misses did not count towards the "total files sent" stat. (This only affected the readout in the GUI; server-side stats are not calculated by the clients.)
To update an existing client: shut it down, download Hentai@Home 1.6.4, extract the archive, copy the jar files over the existing ones, then restart the client.

The full source code for H@H is available and licensed under the GNU General Public License v3, and can be downloaded here. Building it from source only requires OpenJDK 8 or newer.

For information on how to join Hentai@Home, check out The Hentai@Home Project FAQ.