
NeuroLag
A smart, resource-aware optimization plugin that dynamically adjusts mob AI based on server TPS and RAM to ensure a lag-free SMP experience.
NeuroLag 1.6.0
Release — 29 April 2026
No changelog provided.
NeuroLag 1.5.2
Release — 23 April 2026
[1.5.2] — 2026-04-23 — Bug Fix & Safety Patch
Fixed — plugin.yml
- Critical: wrong `api-version` — `api-version` was set to `1.5.1` (the plugin version) instead of the required Bukkit API version `1.21`. Paper rejects plugins with a non-standard `api-version` string, causing the plugin to either fail to load or log a persistent warning on every server start. Changed to `api-version: 1.21`.
Fixed — WebDashboard
- Token not generated when `monitors.yml` is deleted or the web dashboard is disabled. `ensureStrongToken()` was only called after the `if (!webDashboardEnabled) return` guard, meaning servers that had the dashboard disabled (or whose `monitors.yml` was deleted and recreated by `saveResource`) would still have the default placeholder token the first time the dashboard was enabled — a silent security hole. `ensureStrongToken()` now runs before the enabled check on every `start()` call.
- If `monitors.yml` does not yet exist, `saveResource()` is called first to create the file structure before the generated token is written into it.
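The token-hardening logic can be sketched as follows. This is a minimal illustration, not the plugin's actual code: the method name `ensureStrongToken()` and the placeholder string come from the changelog, while the class name and the pass-the-current-token-in signature are assumptions.

```java
import java.security.SecureRandom;
import java.util.Base64;

public class TokenGuard {
    static final String PLACEHOLDER = "change-this-token-now";

    // Returns the existing token if the admin already customised it,
    // otherwise generates a 24-byte URL-safe random token. Intended to run
    // BEFORE any "dashboard disabled" early-return, so a strong token
    // always exists by the time the dashboard is first enabled.
    public static String ensureStrongToken(String current) {
        if (current != null && !current.equals(PLACEHOLDER)) {
            return current; // already customised — keep it
        }
        byte[] raw = new byte[24];
        new SecureRandom().nextBytes(raw);
        // 24 bytes encode to exactly 32 Base64 characters, no padding
        return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
    }
}
```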
Fixed — LagEngine — Culling count wrong when protected zones are present
- Protected-zone mobs were counted toward `targetRemoveCount` but never actually removed, causing the cull pass to under-remove and leaving entity counts above `maxEntities` indefinitely on servers with active zone protection. `cull()` now pre-filters protected mobs from the candidate pool before computing `targetRemoveCount`, so the removal math uses only the actually-cullable mob count.
- The redundant zone check inside the removal loop is kept as a race-condition safety net.
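The corrected removal math amounts to deriving the target from the cullable count rather than the total. A minimal sketch (class name and integer-count signature are illustrative; only the `targetRemoveCount` / `maxEntities` names come from the changelog):

```java
public class CullMath {
    // Corrected target computation: protected mobs are excluded BEFORE
    // the removal count is derived. The old code derived the target from
    // the total count and then skipped protected mobs inside the loop,
    // so it systematically under-removed.
    public static int targetRemoveCount(int totalMobs, int protectedMobs, int maxEntities) {
        int cullable = totalMobs - protectedMobs;          // actually removable
        int excess = Math.max(0, totalMobs - maxEntities); // how far over the cap
        return Math.min(cullable, excess);                 // never plan more than we can cull
    }
}
```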
Fixed — LagEngine — AI update scheduler saturation on very mob-dense worlds
- No upper bound on `runTaskLater` calls per tick — on worlds with thousands of entities, `applyAiBatched()` could schedule dozens of batch tasks in a single tick, queuing more work than the scheduler could drain, leading to compounding latency.
- Added `AI_UPDATE_PER_TICK_CAP = 80`: at most 80 mob AI updates are scheduled per engine tick. The next monitor tick processes the remaining mobs, spreading load evenly.
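The carry-over behaviour can be sketched with a simple work queue. The cap value 80 comes from the changelog; the queue-based class and its method names are purely illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class BatchedScheduler {
    static final int AI_UPDATE_PER_TICK_CAP = 80; // value from the changelog

    private final Deque<Integer> pending = new ArrayDeque<>();

    public void enqueue(List<Integer> mobs) { pending.addAll(mobs); }

    // Called once per engine tick: drains at most the cap, leaving the
    // remainder for the next monitor tick instead of flooding the
    // scheduler with more work than it can process.
    public int tick() {
        int processed = 0;
        while (processed < AI_UPDATE_PER_TICK_CAP && !pending.isEmpty()) {
            pending.poll(); // stand-in for scheduling one mob AI update
            processed++;
        }
        return processed;
    }

    public int pendingCount() { return pending.size(); }
}
```

With 200 queued mobs, three consecutive ticks process 80, 80, and 40 updates — the load is spread evenly instead of spiking.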
Fixed — StressTestManager — Server crash on large spawns at low-chunk-count locations
- No per-chunk mob density check — spawning 2 000+ mobs at or near the world spawn could saturate loaded chunks and crash the server within seconds.
- Before spawning, the manager now scans a 7×7 chunk area around the target location and compares the current entity count against `stress-test.max-mobs-per-chunk` (default: 80).
- If the limit would be exceeded, the spawn count is automatically reduced to the safe maximum and a warning is logged. If the area is already at capacity, the command is rejected with a descriptive error message.
- New config key in `systems.yml`: `stress-test.max-mobs-per-chunk: 80`.
Fixed — MultiServerSync — MySQL reconnect attempt logged on every polling cycle
- When the MySQL database was down for an extended period, `ensureConnected()` logged "MySQL connection lost — reconnecting in Xs…" on every poll interval (default every 10 s), flooding the console with hundreds of lines.
- Reconnect log messages are now gated behind the same power-of-2 streak filter already used for SQL error warnings (logs on streak 1, 2, 4, 8, 16 …), reducing noise by up to 95% during prolonged outages while still keeping the first occurrence visible.
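The power-of-two streak filter reduces to a single bit trick: a positive integer n is a power of two exactly when `(n & (n - 1)) == 0`. A minimal sketch (class and method names are illustrative; the gating rule itself is the one stated in the changelog):

```java
public class LogGate {
    // Log on the 1st, 2nd, 4th, 8th, 16th ... consecutive failure.
    // Clearing n's lowest set bit with n & (n - 1) yields zero only
    // when n had exactly one bit set, i.e. when n is a power of two.
    public static boolean shouldLog(int streak) {
        return streak > 0 && (streak & (streak - 1)) == 0;
    }
}
```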
Config changes
# systems.yml — new in 1.5.2
stress-test:
  max-mobs-per-chunk: 80  # NEW — per-chunk density safety cap for stress tests
NeuroLag 1.5.1
Release — 20 April 2026
[1.5.1] — 2026-04-20 — Code Quality & Performance Patch
Fixed / Improved — ZoneManager
- WorldGuard region cache — `getApplicableRegions()` was called for every mob on every monitor tick. Queries are now cached per chunk with a 100-tick TTL (`ConcurrentHashMap`). The cache is invalidated automatically when the TTL expires, eliminating excessive WorldGuard API pressure on large servers.
- CuboidZone coords changed from `double` to `int` — block-level precision is sufficient; int arithmetic is faster and the record is more memory-efficient.
- Zone `initialize()` now clears the WG cache on reload.
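A per-chunk TTL cache of this shape might look like the following minimal sketch. The 100-tick TTL and the `ConcurrentHashMap` come from the changelog; packed-long chunk keys, the generic value type, and the explicit tick parameter are assumptions made for testability.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ChunkTtlCache<V> {
    private static final long TTL_TICKS = 100; // TTL from the changelog

    private record Entry<V>(V value, long storedAtTick) {}
    private final Map<Long, Entry<V>> cache = new ConcurrentHashMap<>();

    public void put(long chunkKey, V value, long nowTick) {
        cache.put(chunkKey, new Entry<>(value, nowTick));
    }

    // Returns the cached value, or null when absent or expired.
    public V get(long chunkKey, long nowTick) {
        Entry<V> e = cache.get(chunkKey);
        if (e == null) return null;
        if (nowTick - e.storedAtTick() >= TTL_TICKS) {
            cache.remove(chunkKey); // lazily drop the stale entry
            return null;
        }
        return e.value();
    }

    public void clear() { cache.clear(); } // e.g. on reload
}
```

A cache miss falls through to the expensive WorldGuard query; a hit within 100 ticks skips it entirely.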
Fixed / Improved — ProfileManager
- Profile validation before apply — switching to a profile where `critical-tps ≥ medium-tps` or `max-entities < 1` now returns a clear error instead of silently corrupting the engine state.
- Active profile persisted across restarts — the selected profile name is written to `plugins/NeuroLag/active-profile.dat` on switch and reloaded automatically on startup/reload, so profiles survive server restarts without manual re-selection.
- `switchProfile()` return type changed from `boolean` to `String` (`null` = success, `"NOT_FOUND"` or an error message = failure) — the NeuroLag main command was updated accordingly.
Fixed / Improved — PredictiveScheduler
- Hand-rolled JSON parser replaced with Gson (bundled with Paper) — the previous `split`/`replaceAll` parser was fragile and could silently produce wrong data on corrupt files. Gson provides safe serialisation and clean error handling; corrupt files now log a warning instead of producing garbage hourly averages.
- Loaded samples are capped to the last 60 per hour on load, not only on record.
Fixed / Improved — CpuMonitor
- Graceful fallback for non-Sun JVMs — if `com.sun.management.OperatingSystemMXBean` is unavailable (GraalVM, some container JVMs), the monitor now falls back to `OperatingSystemMXBean.getSystemLoadAverage()` normalised by CPU count. If even that is unavailable, the feature disables itself gracefully instead of throwing at construction time.
- EMA smoothing (α = 0.3) — single-tick CPU spikes no longer toggle the throttle on/off erratically. The exponential moving average keeps the reading stable under transient load.
- The strategy (SUN_PROCESS / SYSTEM_LOAD / DISABLED) is selected once at construction and stored in an enum — no repeated `instanceof` checks every tick.
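The EMA itself is the standard recurrence `new = α·sample + (1 − α)·previous`. A minimal sketch with the changelog's α = 0.3 (class name and the seed-with-first-sample behaviour are assumptions):

```java
public class EmaSmoother {
    private final double alpha; // α = 0.3 in the changelog
    private double value = -1;  // -1 sentinel = no sample recorded yet

    public EmaSmoother(double alpha) { this.alpha = alpha; }

    // Exponential moving average: the first sample seeds the average,
    // subsequent samples are blended in, damping single-tick spikes.
    public double update(double sample) {
        value = (value < 0) ? sample : alpha * sample + (1 - alpha) * value;
        return value;
    }
}
```

A single spike from 10% to 100% CPU moves the smoothed reading only to 37%, so a one-tick spike cannot flip the throttle on by itself.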
Fixed / Improved — RegionOptimizer
- Player chunk position cache — `refresh()` now tracks each player's last known chunk. The HOT/COLD region map is only rebuilt when at least one player has moved to a different chunk since the previous call. On a stable server this eliminates the full-player scan on every monitor tick.
- `isBeyondPathfindingDistance()` uses the cached positions instead of calling `world.getPlayers()` a second time per mob check.
Fixed / Improved — NeuroLagAPI
- Added `NeuroLagAPI.getInstance()` for cleaner third-party plugin integration.
- The JSON payload sent via the plugin message channel now uses Gson instead of a manual StringBuilder, eliminating potential escape bugs.
Fixed / Improved — LagReporter (Discord)
- Retry with exponential back-off — Discord webhook requests are retried up to 3 times (delays: 1 s → 2 s → 4 s) before giving up. Transient network errors and Discord 429 rate-limit responses no longer silently drop notifications.
- Discord embed payload now built with Gson — no more manual string escaping.
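The retry policy above — one initial attempt plus up to three retries with doubling delays — can be sketched as follows. All class and parameter names are illustrative; the sleeper is injected so the sketch can be exercised without actually waiting.

```java
import java.util.function.BooleanSupplier;
import java.util.function.LongConsumer;

public class WebhookRetry {
    // Exponential back-off: attempt, then on failure sleep 1 s, 2 s, 4 s
    // between retries, giving up after maxRetries additional attempts.
    public static boolean sendWithRetry(BooleanSupplier request,
                                        int maxRetries,
                                        LongConsumer sleeper) {
        long delayMs = 1000;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            if (request.getAsBoolean()) return true; // delivered
            if (attempt == maxRetries) break;        // retries exhausted
            sleeper.accept(delayMs);                 // 1000, 2000, 4000 ms
            delayMs *= 2;
        }
        return false;
    }
}
```

With `maxRetries = 3`, a permanently failing request is attempted four times in total, with sleeps of 1 s, 2 s, and 4 s in between.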
Fixed / Improved — StressTestManager
- Confirmation prompt for spawns > 1 000 mobs — the sender must repeat the command within 30 seconds to confirm. Prevents accidental large spawns.
- Multiple entity types — the `stress-test.entity-types` config list (default: ZOMBIE, SKELETON, CREEPER) is cycled round-robin across spawned mobs, producing a more realistic mixed load. Invalid or non-spawnable type names log a warning and are skipped.
Fixed / Improved — LagEngine
- Added internal processing metrics: `lastTickMobCount` and `lastTickProcessingMs`, visible in `/nlag status` ("Last Tick: N mobs, X ms").
- `ZoneManager.tick()` is now called once per engine tick to advance the WG cache TTL counter.
Fixed / Improved — WebDashboard
- Rate limiting now also covers `GET /` (the HTML dashboard page), not only `/api/*` endpoints.
- Token auto-generation log output upgraded to `SEVERE` level and formatted as a clearly visible bordered block so admins cannot miss the new token in the console.
Fixed / Improved — ConfigManager / ConfigValidator
- Added the `stress-test.entity-types` list field. `ConfigValidator` now checks that the `entity-types` list is non-empty when the stress test is enabled.
Config changes (systems.yml)
stress-test:
  entity-types:  # NEW — 1.5.1
    - ZOMBIE
    - SKELETON
    - CREEPER
NeuroLag 1.5.0
Release — 19 April 2026
[1.5.0] — 2026-04-19 — Security Hardening & Stability Release
Fixed — Security
- [Bug #1] Web dashboard weak default token / no rate limiting / no IP filtering (`WebDashboard.java`, `ConfigManager.java`, `monitors.yml`)
  - Auto-generates a cryptographically strong 24-byte random token on first startup whenever the default placeholder `"change-this-token-now"` is detected. The new token is immediately persisted to `monitors.yml` and printed to the console.
  - Added per-IP rate limiting (sliding 60-second window, configurable via `web-dashboard.rate-limit.max-requests-per-minute`, default 60). Returns HTTP 429 when the limit is exceeded.
  - Added an optional IP allow-list (`web-dashboard.ip-whitelist`, disabled by default). When enabled, only explicitly listed IPs can reach `/api/status` and `/api/cmd`.
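The per-IP limiter can be sketched as a sliding-window counter. Only the 60-second window, per-IP scope, and HTTP 429 behaviour come from the changelog; the class name, explicit timestamp parameter, and map layout are assumptions made so the sketch is deterministic.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RateLimiter {
    private static final long WINDOW_MS = 60_000; // sliding 60-second window
    private final int maxPerWindow;
    private final Map<String, Deque<Long>> hits = new ConcurrentHashMap<>();

    public RateLimiter(int maxPerWindow) { this.maxPerWindow = maxPerWindow; }

    // true = allow; false = over the limit (caller would return HTTP 429).
    public synchronized boolean allow(String ip, long nowMs) {
        Deque<Long> q = hits.computeIfAbsent(ip, k -> new ArrayDeque<>());
        while (!q.isEmpty() && nowMs - q.peekFirst() >= WINDOW_MS) {
            q.pollFirst(); // drop requests that aged out of the window
        }
        if (q.size() >= maxPerWindow) return false;
        q.addLast(nowMs);
        return true;
    }
}
```

Each IP gets its own budget, and old hits fall out of the window automatically, so a burst 61 seconds ago no longer counts against the client.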
Fixed — Stability
- [Bug #2] Stress test could spawn unlimited mobs with no cooldown (`StressTestManager.java`, `ConfigManager.java`, `systems.yml`)
  - Hard cap: mob count is clamped to `stress-test.max-mob-count` (default 2 000) regardless of the value passed to `/nlag stresstest`. The sender is notified when the cap applies.
  - Cooldown: a configurable `stress-test.cooldown-seconds` (default 300) must elapse between tests. Attempting to start a test during cooldown shows the remaining time.
  - Both limits are enforced by `ConfigValidator`.
- [Bug #3] Redis subscriber `join(3000)` always timed out on slow networks (`MultiServerSync.java`)
  - `stop()` now holds a `volatile` reference to the active `JedisPubSub` instance and calls `pubSub.unsubscribe()` before `interrupt()` + `join()`. This signals the blocking `jedis.subscribe()` call to return immediately, so `join()` completes in milliseconds rather than timing out. Prevents connection-pool leaks during rapid `/nlag reload` cycles.
- [Bug #4] Smart culling could remove hundreds of entities in one pass → lag spike (`LagEngine.java`)
  - Added `CULL_PER_TICK_CAP = 50` — at most 50 entities are removed per cull invocation. If the entity count remains above `max-entities-per-world` after one pass, the next monitor tick handles the remainder. Eliminates the single-frame mob-removal lag spike seen on servers with thousands of entities.
- [Bug #5] Predictive scheduler lost all historical data on server restart (`PredictiveScheduler.java`)
  - Hourly TPS samples are now saved to `plugins/NeuroLag/predictive-data.json` on `stop()` (server shutdown or `/nlag reload`) and reloaded on `start()`. No external library required — uses a compact hand-rolled JSON serializer/parser. The predictor now accumulates knowledge across restarts and becomes effective much faster on busy servers.
- [Bug #6] Full plugin reload caused noticeable server stutter (`NeuroLag.java`)
  - `reloadPluginState()` now takes config snapshots before reloading and selectively restarts only the services whose configuration sections actually changed (web dashboard, multi-server sync, predictive scheduler, CPU monitor, boss-bar dashboard). The main monitor task and engine throttle are always restarted cleanly. On a typical `/nlag reload` that only changes TPS thresholds, no external services are bounced, eliminating the stutter caused by restarting Redis/MySQL connections unnecessarily.
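The snapshot-and-compare idea behind the selective restart can be sketched with plain maps. The service/section names follow the changelog; representing a config section as a string and the class and method names here are stand-ins for the real YAML handling.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class SelectiveReload {
    // Each external service owns one config section; a service is
    // restarted only when its section's content actually changed between
    // the pre-reload snapshot and the freshly loaded config.
    public static List<String> servicesToRestart(Map<String, String> before,
                                                 Map<String, String> after) {
        List<String> restart = new ArrayList<>();
        for (Map.Entry<String, String> e : after.entrySet()) {
            if (!Objects.equals(before.get(e.getKey()), e.getValue())) {
                restart.add(e.getKey()); // section changed -> bounce this service
            }
        }
        return restart;
    }
}
```

A reload that only touches TPS thresholds produces an empty restart list, so Redis/MySQL connections stay up.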
Changed
- `ConfigValidator` now validates the new stress-test limits and rate-limit settings.
- `systems.yml` — added `stress-test.max-mob-count` and `stress-test.cooldown-seconds`.
- `monitors.yml` — added `web-dashboard.rate-limit` and `web-dashboard.ip-whitelist` sections.
NeuroLag 1.4.1
Release — 18 April 2026
[1.4.1] — 2026-04-18 — Bug Fix Release
Fixed — Critical
- Task leak on reload (`/nlag reload`) — `BackupManager.start()` was called twice in `onEnable()` (once directly, once via `startRuntimeServices()`), causing every scheduled task to be registered a second time on reload. The duplicate direct call has been removed; the initial backup is now triggered asynchronously inside `start()` when no backups exist. `stopRuntimeServices()` now also calls `engine.cancelAllPendingBatchTasks()` before stopping the throttle task, ensuring all in-flight `runTaskLater` batch tasks are cancelled before any new tasks are registered. (`LagEngine`, `NeuroLag`)
- `collectTargets()` running every tick on the main thread — `LagEngine.collectTargets()` called `world.getLivingEntities()` on every monitor tick, causing significant lag spikes on servers with large mob populations. The result is now cached per world and rebuilt at most once every 4 ticks (~0.2 s). The cache is invalidated on `restoreAll()` and on world state transitions. (`LagEngine`)
- FOLLOW_RANGE not fully restored for protected-zone mobs — `applyAiBatched()` set `AI = true` for protected mobs but did not always restore `FOLLOW_RANGE` to its default value, so mobs could permanently retain a reduced follow range after leaving a zone. The fix unconditionally calls `attr.setBaseValue(attr.getDefaultValue())` for every protected mob encountered. Additionally, a new `restoreWorld(World)` helper is called whenever a world transitions back to `NORMAL`, guaranteeing a full attribute restore even for mobs that were never re-processed by the batch scheduler. (`LagEngine`)
- Manual override (`/nlag toggle`) not restoring mobs correctly — in-flight `runTaskLater` batch tasks scheduled by `applyAiBatched()` could execute after `restoreAll()` and re-apply AI restrictions or reduced follow ranges, leaving mobs frozen. `handleToggle()` now calls `engine.cancelAllPendingBatchTasks()` before `engine.restoreAll()` so no pending task survives the toggle. (`LagEngine`, `NeuroLag`)
Fixed — High
- WebDashboard query-string token auth — passing `?token=...` in the URL is insecure (tokens appear in server access logs, browser history, and HTTP referrer headers). The query-string fallback is now disabled by default. A new config key `web-dashboard.auth.allow-query-token` (default `false`) must be explicitly set to `true` to re-enable it. The existing warning is preserved and now also states the config key needed to disable the fallback permanently. (`WebDashboard`, `ConfigManager`, `monitors.yml`)
- Redis subscriber thread not shutting down gracefully — `MultiServerSync.stop()` called `redisSubThread.interrupt()` but returned immediately, so the thread could still be holding a Jedis resource when `jedisPool.close()` ran, causing pool-exhaustion exceptions in the server log on every reload. The fix adds `redisSubThread.join(3000)` after `interrupt()`, giving the thread up to 3 seconds to exit cleanly before the pool is closed. (`MultiServerSync`)
- MySQL reconnect logic throwing repeatedly when DB is down — `ensureConnected()` called `DriverManager.getConnection()` immediately on every sync-cycle failure, flooding the log with stack traces. The fix introduces an atomic error-streak counter and an exponential back-off (sleep time = `min(streak * 2, 30)` seconds before reconnect). Log output now uses a power-of-two gating strategy (log on streak 1, 2, 4, 8, 16, …) to avoid noise during prolonged outages. (`MultiServerSync`)
- Batched AI task delay not capped — with 5 000+ mobs the delay between batches could reach 100 ticks (5 seconds), making mobs visibly unresponsive. The delay is now capped at 20 ticks (1 second) regardless of entity count. (`LagEngine`)
- Smart culling could remove protected-zone mobs — the culling loop checked `zoneManager.isProtected()` only in the main optimization path, not inside `cull()` itself. Mobs inside protected zones can now never be removed by smart culling. (`LagEngine`)
- BackupManager not backing up the `lang/` directory — the ZIP bundle only included root-level `.yml` files. Custom language files stored in `plugins/NeuroLag/lang/` would be lost on restore. `collectConfigFiles()` now recursively includes `lang/*.yml` entries and `restoreZip()` correctly recreates the `lang/` subdirectory on restore. (`BackupManager`)
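The selection rule from the `lang/` backup fix — root-level `.yml` files plus `.yml` files inside `lang/` — can be illustrated in isolation. The real method is `collectConfigFiles()` and presumably walks the data folder itself; this hypothetical `selectForBackup` takes the candidate paths as input so only the filtering rule is shown.

```java
import java.nio.file.Path;
import java.util.List;

public class BackupFiles {
    // Keep .yml files whose parent is the plugin data folder itself, or
    // its lang/ subdirectory — everything else is excluded from the ZIP.
    public static List<Path> selectForBackup(Path dataFolder, List<Path> candidates) {
        Path lang = dataFolder.resolve("lang");
        return candidates.stream()
            .filter(p -> p.toString().endsWith(".yml"))
            .filter(p -> dataFolder.equals(p.getParent()) || lang.equals(p.getParent()))
            .toList();
    }
}
```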
