test: add a systems integration test project + api daemon tests #1830
base: main
Conversation
…demonService
- Updated nodemon.json to remove unnecessary watch entry.
- Adjusted NODEMON_CONFIG_PATH and UNRAID_API_CWD paths for better structure.
- Enhanced error handling in isUnraidApiRunning and start methods of NodemonService to ensure proper logging and resource management.
- Added tests for error scenarios in NodemonService to ensure robustness.

- Introduced UNRAID_API_ROOT to streamline path definitions for nodemon.
- Replaced direct usage of import.meta.dirname with UNRAID_API_ROOT in NODEMON_PATH and UNRAID_API_CWD for improved clarity and maintainability.
- Added dirname import to facilitate the new path structure.

… handling
- Introduced a new test file to validate nodemon path configurations, ensuring they anchor to the package root by default.
- Enhanced the NodemonService to throw an error when nodemon exits immediately after starting, improving robustness.
- Added tests to cover scenarios where nodemon fails to start, ensuring proper logging and resource cleanup.

- Updated the return type of killSpy in nodemon.service.spec.ts to directly return a boolean instead of casting it, improving code clarity and maintainability.

- Implemented stopPm2IfRunning method to stop any running pm2-managed instances of unraid-api before starting nodemon.
- Added findMatchingNodemonPids method to identify existing nodemon processes, improving resource management.
- Updated start method to handle scenarios where a stored pid is running or dead, ensuring proper cleanup and logging.
- Introduced new unit tests to validate the new functionality and ensure robustness in process handling.

- Implemented findDirectMainPids and terminatePids methods to identify and terminate existing unraid-api processes before starting nodemon.
- Enhanced the start method to include checks for running processes, ensuring proper cleanup and logging.
- Updated unit tests to validate the new process management functionality, improving overall robustness.

…n NodemonService
- Added waitForNodemonExit method to ensure nodemon processes are fully terminated before restarting.
- Updated restart method to call waitForNodemonExit, improving process management during restarts.
- Introduced a new unit test to validate the behavior of the restart method, ensuring proper sequence of stop, wait, and start operations.

…tarts
- Updated the start method to restart nodemon if a recorded pid is already running, ensuring proper cleanup and logging.
- Modified the restart method to delegate to start, streamlining the process management logic.
- Enhanced unit tests to validate the new behavior, including scenarios for cleaning up stray processes and ensuring fresh starts.

- Modified the log stream mock to use a file descriptor instead of pipe methods, aligning with the actual implementation in NodemonService.
- Removed unnecessary stdout and stderr pipe mocks from unit tests, simplifying the test setup while maintaining functionality.
- Ensured consistency between the test and implementation for improved clarity and maintainability.

…ution
- Introduced a new integration test file for NodemonService to validate the start and stop functionality of the real nodemon process.
- Implemented setup and teardown logic to create temporary files and directories for testing.
- Enhanced logging and error handling in the tests to ensure proper verification of nodemon's behavior during execution.

…onService
- Introduced PATHS_NODEMON_LOG_FILE to configure the log file for nodemon, allowing for better log management.
- Updated log stream handling in NodemonService to write to the specified nodemon log file.
- Enhanced integration tests to validate logging behavior and ensure proper file creation for both application and nodemon logs.

- Updated API version in api.json from 4.25.3 to 4.27.2.
- Refactored pubsub channel references across multiple files to use GRAPHQL_PUBSUB_CHANNEL instead of PUBSUB_CHANNEL, enhancing consistency and clarity in the codebase.
- Adjusted related tests to ensure they align with the new pubsub channel structure.

…adient based on CSS variable
- Updated UserProfile component to load banner gradient from a CSS variable, allowing for dynamic styling.
- Added tests to verify banner rendering behavior based on the presence of the CSS variable, ensuring correct functionality regardless of theme store settings.
- Removed outdated test cases that relied solely on theme store flags for banner gradient rendering.
Walkthrough

This pull request performs a major migration from PM2 to Nodemon for managing the Unraid API daemon. Changes include removing PM2 configuration and service, adding Nodemon configuration and management service, updating all CLI commands to use the new service, migrating pubsub channel constants across resolvers, updating GraphQL schema types from pm2 to nodemon, adding system integration tests, and enhancing boot-time logging throughout the application stack.

Changes
Sequence Diagram

sequenceDiagram
participant User
participant CLI as CLI Command
participant NodemonService
participant NodemonProcess as Nodemon Process
participant UnraidAPI as Unraid API
participant PIDFile as PID File
User->>CLI: run start command
CLI->>NodemonService: start(env)
NodemonService->>NodemonService: validate paths
NodemonService->>NodemonService: ensure dependencies
NodemonService->>NodemonService: acquire lock
NodemonService->>NodemonProcess: spawn detached
NodemonProcess-->>UnraidAPI: watch & restart
NodemonService->>PIDFile: write PID
NodemonService->>NodemonService: verify process alive
NodemonService-->>CLI: ✓ started
CLI-->>User: success message
User->>CLI: run stop command
CLI->>NodemonService: stop({ force? })
NodemonService->>PIDFile: read PID
NodemonService->>NodemonProcess: send SIGTERM
NodemonProcess-->>UnraidAPI: graceful shutdown
NodemonService->>NodemonService: wait for exit (timeout)
alt Force Required
NodemonService->>NodemonProcess: kill -9
end
NodemonService->>PIDFile: delete
NodemonService-->>CLI: ✓ stopped
CLI-->>User: success message
User->>CLI: run status command
CLI->>NodemonService: status()
NodemonService->>PIDFile: read PID
NodemonService->>NodemonService: check PID running (kill -0)
NodemonService-->>CLI: boolean
CLI-->>User: display status
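The status path in the diagram reduces to a PID-file probe with signal 0. A minimal sketch of that step follows; the pid-file path matches the one used elsewhere in this PR, while the helper name is illustrative and not the service's actual method:

import { readFile } from 'node:fs/promises';

const NODEMON_PID_PATH = '/var/run/unraid-api/nodemon.pid'; // path as used in this PR's tests

// "kill -0" semantics: signal 0 checks that the process exists without sending a real signal.
async function isDaemonRunning(pidFile: string = NODEMON_PID_PATH): Promise<boolean> {
    try {
        const pid = Number.parseInt((await readFile(pidFile, 'utf-8')).trim(), 10);
        if (Number.isNaN(pid) || pid <= 0) return false;
        process.kill(pid, 0);
        return true;
    } catch {
        return false;
    }
}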
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning) | ✅ Passed checks (2 passed)
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files

@@ Coverage Diff @@
## codex/switch-api-process-management-to-nodemon #1830 +/- ##
==================================================================================
+ Coverage 52.39% 52.45% +0.05%
==================================================================================
Files 873 873
Lines 50522 50684 +162
Branches 5091 5108 +17
==================================================================================
+ Hits 26473 26585 +112
- Misses 23974 24024 +50
Partials 75 75

☔ View full report in Codecov by Sentry.
This plugin has been deployed to Cloudflare R2 and is available for testing.
f6e1f05 to 0d7bee2
Actionable comments posted: 19
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
api/src/unraid-api/graph/resolvers/metrics/metrics.resolver.ts (1)
40-69: Make totalPower consistent with the per-package power sentinel values
Right now totalPower is computed from pkg.power directly (with JS coercions like null >= 0), while packages.power uses pkg.power ?? -1. Consider normalizing once and reusing.

-        const totalPower = Number(
-            packageList
-                .map((pkg) => pkg.power)
-                .filter((power) => power >= 0)
-                .reduce((sum, power) => sum + power, 0)
-                .toFixed(2)
-        );
+        const normalizedPower = packageList.map((pkg) => pkg.power ?? -1);
+        const totalPower = Number(
+            normalizedPower
+                .filter((power) => power >= 0)
+                .reduce((sum, power) => sum + power, 0)
+                .toFixed(2)
+        );
 ...
         const packages: CpuPackages = {
             id: 'metrics/cpu/packages',
             totalPower,
-            power: packageList.map((pkg) => pkg.power ?? -1),
+            power: normalizedPower,
             temp: packageList.map((pkg) => pkg.temp ?? -1),
         };

api/src/unraid-api/cli/logs.command.ts (1)
15-24: Harden --lines parsing (radix + clamp to positive)

 parseLines(input: string): number {
-    const parsedValue = parseInt(input);
-    return Number.isNaN(parsedValue) ? 100 : parsedValue;
+    const parsedValue = Number.parseInt(input, 10);
+    if (Number.isNaN(parsedValue) || parsedValue <= 0) return 100;
+    return parsedValue;
 }
🧹 Nitpick comments (26)
api/src/__test__/graphql/resolvers/rclone-api.service.test.ts (1)
23-23: Replace any types with proper typing.

The coding guidelines specify to never use the any type and always prefer proper typing. Several instances violate this:

- Line 23: response?: any in the HTTPError mock class
- Lines 80-83: Mock variables typed as any
- Lines 113, 465, 517: Cast expressions using as any

Apply these improvements:
For the HTTPError mock (line 23):
-    response?: any;
+    response?: { statusCode: number; body: string };

For mock variable declarations (lines 80-83):
-    let mockGotPost: any;
-    let mockExeca: any;
-    let mockPRetry: any;
-    let mockExistsSync: any;
+    let mockGotPost: ReturnType<typeof vi.fn>;
+    let mockExeca: ReturnType<typeof vi.fn>;
+    let mockPRetry: ReturnType<typeof vi.fn>;
+    let mockExistsSync: ReturnType<typeof vi.fn>;

For mock return values and error objects (lines 113, 465, 517):
Use more specific types based on what the mocks represent, or create type definitions that match the expected structure rather than casting to any.

As per coding guidelines.
Also applies to: 80-83, 113-113, 465-465, 517-517
web/__test__/components/UserProfile.test.ts (1)
361-404: Test correctly validates CSS-driven banner rendering.

The test properly verifies that the banner renders based on CSS variable, independent of theme store flags. Good use of isolated Pinia instance and proper cleanup.
The selector 'div.absolute.z-0' at line 397 could be more specific to avoid false matches. Consider using more of the banner's distinctive classes:

-        const bannerEl = localWrapper.find('div.absolute.z-0');
+        const bannerEl = localWrapper.find('div.pointer-events-none.absolute.inset-y-0.right-0.left-0.z-0');

api/src/store/listeners/array-event-listener.ts (1)
15-38: Make pubsub.publish usage consistent + verify getArrayData accepts state getters.

If publish() returns a promise in your PubSub impl, consider awaiting (or explicitly void-ing) both calls:

-        pubsub.publish(GRAPHQL_PUBSUB_CHANNEL.ARRAY, { array });
+        await pubsub.publish(GRAPHQL_PUBSUB_CHANNEL.ARRAY, { array });
         logger.debug({ event: array }, 'Array was updated, publishing event');

Also please double-check getArrayData’s signature supports getArrayData(getState) / getArrayData(getOriginalState) (getter functions) rather than requiring getArrayData(getState()) / getArrayData(getOriginalState()).

tests/system-integration/.prettierrc.cjs (1)
1-4: Optional: drop the header comment to align with “avoid comments” guideline.

api/src/__test__/core/utils/process/unraid-api-running.integration.test.ts (1)
35-53: Consider adding a test case for stale PID files.

The current test coverage includes missing files and invalid (non-numeric) PIDs, but doesn't test the scenario where the PID file contains a valid number that doesn't correspond to a running process. This is a realistic edge case (e.g., after system restart without cleanup).
Add a test case like:
+    it('returns false when pid file contains a stale pid', async () => {
+        // Use a PID that's unlikely to exist (very high number)
+        writeFileSync(pidPath, '999999');
+        const isUnraidApiRunning = await loadIsRunning();
+
+        expect(await isUnraidApiRunning()).toBe(false);
+    });

api/src/cli.ts (1)
11-11: Consider sourcing BOOT_LOG_PATH from environment configuration.

The boot log path is hardcoded here but similar paths (like NODEMON_PID_PATH) are defined in api/src/environment.ts. For consistency and configurability, consider importing this from the environment module.

tests/system-integration/eslint.config.ts (1)
14-15: Reconsider disabling TypeScript strict typing rules.

The coding guidelines specify: "Never use the any type and always prefer proper typing" and "Avoid using casting whenever possible, prefer proper typing from the start." Disabling @typescript-eslint/no-explicit-any in test files may lead to poor typing practices spreading through the test suite.

Consider keeping these rules enabled to maintain type safety, or at least disable them selectively on a per-line basis where absolutely necessary.
tests/system-integration/src/helpers/ssh.ts (1)
1-18: Extensive documentation is helpful but consider the coding guidelines.

The coding guidelines state: "Never add comments unless they are needed for clarity of function" and "Never add comments for obvious things." While this JSDoc is well-written and helpful, consider whether all of it is necessary. The function signatures and types are already fairly self-documenting.
That said, since this is a new helper module that other developers will use, the documentation provides valuable context, especially the examples.
api/src/unraid-api/cli/report.command.ts (1)
36-40: Tighten error logging to preserve stack / avoid [object Object] (Line 36-40).
Consider formatting err as err instanceof Error ? err.stack ?? err.message : JSON.stringify(err) (or String(err)) rather than concatenation.
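A small sketch of that formatting, pulled out into a helper; the surrounding try/catch and message are illustrative, not the command's actual code:

function formatError(err: unknown): string {
    return err instanceof Error ? (err.stack ?? err.message) : JSON.stringify(err);
}

try {
    throw new Error('report generation failed');
} catch (err) {
    // Keeps the stack for Error instances and avoids "[object Object]" for plain objects.
    console.error(`Report failed: ${formatError(err)}`);
}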
api/src/unraid-api/graph/resolvers/info/versions/versions.resolver.ts (1)

6-6: Avoid sync disk read per resolver call + prefer Nest Logger over console.error (Line 6-6, 38-46).

If packages() is queried frequently, consider caching getPackageJson() results (module-level memoization or a small service), and switch error logging to new Logger(VersionsResolver.name) for consistency.

Also applies to: 38-46
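A sketch of the module-level memoization idea; the helper name and package.json shape are assumed for illustration, not taken from the resolver:

import { readFileSync } from 'node:fs';

interface PackageJson {
    name?: string;
    version?: string;
    dependencies?: Record<string, string>;
}

let cachedPackageJson: PackageJson | undefined;

// Reads and parses package.json once per process instead of on every resolver call.
function getPackageJsonCached(path = './package.json'): PackageJson {
    cachedPackageJson ??= JSON.parse(readFileSync(path, 'utf-8')) as PackageJson;
    return cachedPackageJson;
}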
api/src/unraid-api/graph/resolvers/metrics/metrics.resolver.integration.spec.ts (1)
125-142: Avoid casting to any.

Line 141 casts the return value to any, which violates the coding guideline to never use the any type.

Consider defining a proper type or interface for the memory load return value:
+interface MemoryLoadResult {
+    id: string;
+    total: number;
+    used: number;
+    free: number;
+    available: number;
+    active: number;
+    buffcache: number;
+    percentTotal: number;
+    swapTotal: number;
+    swapUsed: number;
+    swapFree: number;
+    percentSwapTotal: number;
+}
+
 vi.spyOn(memoryService, 'generateMemoryLoad').mockImplementation(async () => {
     executionCount++;
     await new Promise((resolve) => setTimeout(resolve, 50));
     return {
         id: 'memory-utilization',
         total: 16000000000,
         used: 8000000000,
         free: 8000000000,
         available: 8000000000,
         active: 4000000000,
         buffcache: 2000000000,
         percentTotal: 50,
         swapTotal: 0,
         swapUsed: 0,
         swapFree: 0,
         percentSwapTotal: 0,
-    } as any;
+    } as MemoryLoadResult;
 });

As per coding guidelines.
api/src/core/log.ts (1)
18-21: Use imported constant for consistency.

Line 19 directly accesses process.env.SUPPRESS_LOGS instead of using the imported SUPPRESS_LOGS constant. This is inconsistent with line 24 which uses the imported constant.

Apply this diff:
 export const logDestination =
-    process.env.SUPPRESS_LOGS === 'true'
+    SUPPRESS_LOGS
         ? nullDestination
         : pino.destination({ dest: PATHS_LOGS_FILE, mkdir: true });

api/src/unraid-api/graph/resolvers/metrics/metrics.resolver.ts (2)
31-38: Consider awaiting pubsub.publish(...) to surface publish failures

If publish() is async (common for graphql-subscriptions), not awaiting it can hide errors.

-        pubsub.publish(GRAPHQL_PUBSUB_CHANNEL.CPU_UTILIZATION, { systemMetricsCpu: payload });
+        await pubsub.publish(GRAPHQL_PUBSUB_CHANNEL.CPU_UTILIZATION, { systemMetricsCpu: payload });
 ...
-        pubsub.publish(GRAPHQL_PUBSUB_CHANNEL.MEMORY_UTILIZATION, {
+        await pubsub.publish(GRAPHQL_PUBSUB_CHANNEL.MEMORY_UTILIZATION, {
             systemMetricsMemory: payload,
         });

Also applies to: 73-82
60-68: Drop or gate the duplicate CPU_TELEMETRY debug logs

Two JSON stringifications every 5s is noisy; if you keep one, consider logging only totalPower + counts.

api/src/unraid-api/cli/restart.command.ts (1)
30-35: Avoid passing LOG_LEVEL: undefined into NodemonService

-        const env = { LOG_LEVEL: options.logLevel?.toUpperCase() };
+        const env = { LOG_LEVEL: (options.logLevel ?? LOG_LEVEL.toLowerCase()).toUpperCase() };
         await this.nodemon.restart({ env });

web/composables/gql/graphql.ts (1)
10-25: Avoid any in generated scalars via Codegen config (don’t hand-edit output)

Consider mapping JSON / BigInt scalars to unknown / string / number in the GraphQL CodeGen config so this file can stay compliant without manual changes.
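A hedged sketch of what that could look like in a graphql-codegen config; the output path and preset are assumptions about this repo's setup, but the scalars option itself is standard codegen configuration:

import type { CodegenConfig } from '@graphql-codegen/cli';

const config: CodegenConfig = {
    schema: './generated-schema.graphql', // assumed schema location
    documents: ['src/**/*.ts', 'src/**/*.vue'],
    generates: {
        'src/composables/gql/': {
            preset: 'client',
            config: {
                // Map loose scalars so generated types avoid `any`.
                scalars: {
                    JSON: 'unknown',
                    BigInt: 'string',
                },
            },
        },
    },
};

export default config;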
tests/system-integration/src/helpers/utils.ts (1)

1-19: Optional: use timers/promises for sleep (and consider trimming the JSDoc noise)

+import { setTimeout as sleep } from 'node:timers/promises';
+
 export const ONE_SECOND = 1000;
 ...
-export function sleep(ms: number): Promise<void> {
-    return new Promise((resolve) => setTimeout(resolve, ms));
-}
+export { sleep };

api/src/unraid-api/cli/start.command.ts (1)
20-23: Make run() resilient: default options + guaranteed LOG_LEVEL

-    async run(_: string[], options: LogLevelOptions): Promise<void> {
+    async run(_: string[], options: LogLevelOptions = {}): Promise<void> {
         this.logger.info('Starting the Unraid API');
-        await this.nodemon.start({ env: { LOG_LEVEL: options.logLevel?.toUpperCase() } });
+        await this.nodemon.start({
+            env: { LOG_LEVEL: (options.logLevel ?? LOG_LEVEL.toLowerCase()).toUpperCase() },
+        });
     }

api/src/unraid-api/cli/stop.command.ts (1)
5-27: StopCommand nodemon migration looks good
Only nit: consider adding an explicit : Promise<void> return type on run() for consistency with other commands.

tests/system-integration/src/helpers/server.ts (1)
1-15: Handle “SSH drops immediately after reboot” explicitly (and align docs).

rebootServer() docs say it “waits briefly for the connection to drop”, but it only runs reboot. Either adjust the comment or add a small delay / rely on the offline waiter.
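One possible shape for aligning the behavior with its docs, reusing helpers that appear elsewhere in this test suite (remoteExec, waitForServerOffline); the import paths are assumed and this is a sketch, not the PR's implementation:

import { remoteExec } from './ssh.js'; // assumed export from the ssh helper
import { waitForServerOffline } from './server.js'; // assumed local import path

// Issue the reboot, then actually wait for SSH to drop before returning,
// so callers can immediately start polling for the server to come back.
export async function rebootServerAndWait(): Promise<void> {
    await remoteExec('reboot');
    const wentOffline = await waitForServerOffline();
    if (!wentOffline) {
        throw new Error('Server did not go offline after reboot command');
    }
}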
web/src/composables/gql/graphql.ts (1)

1548-1550: PackageVersions.nodemon exposure is fine; ensure frontend removed any pm2 assumptions.

Please confirm any UI usage of packages.pm2 (if it existed) has been removed/updated to packages.nodemon.

tests/system-integration/src/tests/singleton-daemon.test.ts (1)
47-68: Reduce fixed sleeps to lower flake risk (poll with existing helpers).

There are multiple setTimeout waits (2s/3s/1s). Prefer waiting on waitForStart(...) / other polling so slower boxes don’t intermittently fail.
tests/system-integration/src/helpers/api-lifecycle.ts (1)
1-23: Trim the large file header JSDoc (optional). The examples are nice, but they’re quite verbose for test helpers and will go stale quickly.

tests/system-integration/src/helpers/process.ts (1)
98-124: Reduce accidental matches for main.js detection. Grepping node.*dist/main\.js can match unrelated services on systems that run other Node apps with similar entrypoints; if possible, include a more specific path discriminator (e.g., full UNRAID_API_CWD/dist/main.js) like the Nest NodemonService does.
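A sketch of a tighter match, anchored to the install directory used elsewhere in this PR (/usr/local/unraid-api); the helper names are illustrative:

// Escape regex metacharacters so the cwd is matched literally by grep -E.
function escapeRegex(value: string): string {
    return value.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

// Counts only the API's own entrypoint instead of any node ... dist/main.js process.
function buildMainProcessCountCommand(cwd = '/usr/local/unraid-api'): string {
    return `ps -eo args 2>/dev/null | grep -E 'node.*${escapeRegex(cwd)}/dist/main\\.js' | grep -v grep | wc -l`;
}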
362-378: Avoid the as Record<string, string> env cast; let spawn accept NodeJS.ProcessEnv. The cast can hide undefined values and defeats type checking. Consider typing env as NodeJS.ProcessEnv and only materializing required keys.
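A sketch of the typing change; the binary and config paths mirror the ones asserted in this PR's unit tests, and the rest is illustrative:

import { spawn } from 'node:child_process';

// NodeJS.ProcessEnv already allows `string | undefined` values and matches spawn()'s
// env option type, so no `as Record<string, string>` cast is needed.
function spawnNodemon(extraEnv: NodeJS.ProcessEnv = {}): void {
    const env: NodeJS.ProcessEnv = { ...process.env, ...extraEnv };
    const child = spawn(
        process.execPath,
        ['/usr/bin/nodemon', '--config', '/etc/unraid-api/nodemon.json', '--quiet'],
        { env, detached: true, stdio: 'ignore' }
    );
    child.unref();
}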
98-126: Explicitly type JSON.parse(stdout) as unknown instead of relying on implicit any. While the array check at line 117 provides runtime safety, explicitly declaring the type as unknown improves clarity and makes the validation intent explicit. This aligns with the codebase pattern seen in docker-php.service.ts and rclone-api.service.ts, where validation happens after parsing to unknown.
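A sketch of that explicit-unknown pattern; the expected shape (an array of numeric PIDs) is assumed for illustration:

// Parse first, then narrow; the explicit `unknown` makes the validation step intentional.
function parsePidList(stdout: string): number[] {
    const parsed: unknown = JSON.parse(stdout);
    if (!Array.isArray(parsed)) {
        return [];
    }
    return parsed.filter((value): value is number => Number.isInteger(value));
}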
| { | ||
| "watch": [ | ||
| "dist/main.js" | ||
| ], | ||
| "ignore": [ | ||
| "node_modules", | ||
| "src", | ||
| ".env.*" | ||
| ], | ||
| "exec": "node $UNRAID_API_SERVER_ENTRYPOINT", | ||
| "signal": "SIGTERM", | ||
| "ext": "js,json", | ||
| "restartable": "rs", | ||
| "env": { | ||
| "NODE_ENV": "production" | ||
| } | ||
| } |
🧩 Analysis chain
🌐 Web query:
nodemon exec environment variable expansion $VAR shell
💡 Result:
Short answer: nodemon’s --exec runs your command directly (no shell), so shell $VAR expansion won’t happen. Run a shell explicitly (sh -c or bash -c) or put the env on the same command line.
Examples:
- Force shell expansion:
  nodemon --exec "sh -c 'node server.js --opt="$MYVAR"'"
- Set var for the command:
  MYVAR=foo nodemon --exec "node server.js"
References: Node child_process (spawn vs exec) and common patterns showing sh -c and nodemon in containers. [1][2][3]
Sources
- Node.js child_process docs (spawn vs exec and shell behavior). [turn3search7]
- FreeCodeCamp article – exec vs spawn and shell syntax. [turn3search0]
- Example using sh -c with nodemon in Docker / run commands. [turn2search2]
🏁 Script executed:
# Check the nodemon version used in the project
cd api && cat package.json | grep -A 5 -B 5 nodemon || echo "nodemon not found in api/package.json"

Repository: unraid/api
Length of output: 1091
🏁 Script executed:
# Check if there's a main project package.json
cat package.json | grep -A 5 -B 5 nodemon || echo "nodemon not found in root package.json"

Repository: unraid/api
Length of output: 92
🏁 Script executed:
# Look for any documentation or configuration examples in the repo
find . -type f \( -name "*.md" -o -name "*.txt" \) -exec grep -l "nodemon\|UNRAID_API_SERVER_ENTRYPOINT" {} \; 2>/dev/null | head -20

Repository: unraid/api
Length of output: 90
🏁 Script executed:
# Double-check: Look at the actual nodemon.json file being reviewed
cat api/nodemon.json

Repository: unraid/api
Length of output: 355
Fix nodemon exec to use shell wrapper for env-var expansion (Line 11).
The exec command "node $UNRAID_API_SERVER_ENTRYPOINT" will not expand the environment variable because nodemon spawns the command directly without a shell. Change to: "exec": "sh -c 'node $UNRAID_API_SERVER_ENTRYPOINT'" to ensure the variable is properly resolved at runtime.
🤖 Prompt for AI Agents
In api/nodemon.json around lines 1 to 17, the exec value currently uses "node
$UNRAID_API_SERVER_ENTRYPOINT" which won't expand the environment variable
because nodemon executes commands without a shell; update the exec to run via a
shell wrapper so env-vars expand by changing it to use sh -c 'node
$UNRAID_API_SERVER_ENTRYPOINT' (ensure proper quoting) so the environment
variable is resolved at runtime.
| export const isUnraidApiRunning = async (): Promise<boolean> => { | ||
| try { | ||
| if (!(await fileExists(NODEMON_PID_PATH))) { | ||
| return false; | ||
| } | ||
|
|
||
| const pidText = (await readFile(NODEMON_PID_PATH, 'utf-8')).trim(); | ||
| const pid = Number.parseInt(pidText, 10); | ||
| if (Number.isNaN(pid)) { | ||
| return false; | ||
| } | ||
|
|
||
| process.kill(pid, 0); | ||
| return true; | ||
| } catch { | ||
| return false; | ||
| } |
Treat EPERM as “running” (and reject non-positive PIDs).
export const isUnraidApiRunning = async (): Promise<boolean> => {
- try {
+ try {
if (!(await fileExists(NODEMON_PID_PATH))) {
return false;
}
const pidText = (await readFile(NODEMON_PID_PATH, 'utf-8')).trim();
const pid = Number.parseInt(pidText, 10);
- if (Number.isNaN(pid)) {
+ if (Number.isNaN(pid) || pid <= 0) {
return false;
}
- process.kill(pid, 0);
- return true;
- } catch {
- return false;
+ try {
+ process.kill(pid, 0);
+ return true;
+ } catch (err: unknown) {
+ // EPERM => process exists but we can't signal it
+ if (err && typeof err === 'object' && 'code' in err && err.code === 'EPERM') {
+ return true;
+ }
+ return false;
+ }
+ } catch {
+ return false;
}
};📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| export const isUnraidApiRunning = async (): Promise<boolean> => { | |
| try { | |
| if (!(await fileExists(NODEMON_PID_PATH))) { | |
| return false; | |
| } | |
| const pidText = (await readFile(NODEMON_PID_PATH, 'utf-8')).trim(); | |
| const pid = Number.parseInt(pidText, 10); | |
| if (Number.isNaN(pid)) { | |
| return false; | |
| } | |
| process.kill(pid, 0); | |
| return true; | |
| } catch { | |
| return false; | |
| } | |
| export const isUnraidApiRunning = async (): Promise<boolean> => { | |
| try { | |
| if (!(await fileExists(NODEMON_PID_PATH))) { | |
| return false; | |
| } | |
| const pidText = (await readFile(NODEMON_PID_PATH, 'utf-8')).trim(); | |
| const pid = Number.parseInt(pidText, 10); | |
| if (Number.isNaN(pid) || pid <= 0) { | |
| return false; | |
| } | |
| try { | |
| process.kill(pid, 0); | |
| return true; | |
| } catch (err: unknown) { | |
| // EPERM => process exists but we can't signal it | |
| if (err && typeof err === 'object' && 'code' in err && err.code === 'EPERM') { | |
| return true; | |
| } | |
| return false; | |
| } | |
| } catch { | |
| return false; | |
| } | |
| }; |
| // Test with API running but status check returns true | ||
| mockIsUnraidApiRunning.mockResolvedValue(true); | ||
| await reportCommand.report(); | ||
| expect(mockApiReportService.generateReport).toHaveBeenCalledWith(true); |
Clarify test comment - appears misleading.
The comment on line 153 says "Test with API running but status check returns true", but mockIsUnraidApiRunning.mockResolvedValue(true) means the API IS running and the status check returns true. This seems redundant with the earlier test at line 147 that also tests with the API running.
Was this intended to test a different scenario? If so, the mock value should be different. If not, consider removing this redundant test case or clarifying what unique behavior it's verifying.
🤖 Prompt for AI Agents
In api/src/unraid-api/cli/__test__/report.command.test.ts around lines 153 to
156, the inline comment is misleading and the test is redundant with the earlier
one: either remove or update this test to reflect a distinct scenario. If you
intended to test the case where the API is NOT running (status check false)
change mockIsUnraidApiRunning.mockResolvedValue(true) to
mockResolvedValue(false) and update the expectation to
expect(mockApiReportService.generateReport).toHaveBeenCalledWith(false) and
correct the comment; otherwise delete this duplicate test or reword the comment
to clearly state it is verifying the same behavior as the earlier test.
| const createLogStreamMock = (fd = 42, autoOpen = true) => { | ||
| const listeners: Record<string, Array<(...args: any[]) => void>> = {}; | ||
| const stream: any = { | ||
| fd, | ||
| close: vi.fn(), | ||
| destroy: vi.fn(), | ||
| write: vi.fn(), | ||
| once: vi.fn(), | ||
| off: vi.fn(), | ||
| }; | ||
|
|
||
| stream.once.mockImplementation((event: string, cb: (...args: any[]) => void) => { | ||
| listeners[event] = listeners[event] ?? []; | ||
| listeners[event].push(cb); | ||
| if (event === 'open' && autoOpen) cb(); | ||
| return stream; | ||
| }); | ||
| stream.off.mockImplementation((event: string, cb: (...args: any[]) => void) => { | ||
| listeners[event] = (listeners[event] ?? []).filter((fn) => fn !== cb); | ||
| return stream; | ||
| }); | ||
| stream.emit = (event: string, ...args: any[]) => { | ||
| (listeners[event] ?? []).forEach((fn) => fn(...args)); | ||
| }; | ||
|
|
||
| return stream as ReturnType<typeof createWriteStream> & { | ||
| emit: (event: string, ...args: any[]) => void; | ||
| }; | ||
| }; | ||
|
|
||
| const createSpawnMock = (pid?: number) => { | ||
| const unref = vi.fn(); | ||
| return { | ||
| pid, | ||
| unref, | ||
| } as unknown as ReturnType<typeof spawn>; | ||
| }; |
Remove any usages in mocks/helpers (repo rule: no any, even in tests).
-const createLogStreamMock = (fd = 42, autoOpen = true) => {
- const listeners: Record<string, Array<(...args: any[]) => void>> = {};
- const stream: any = {
+type Listener = (...args: unknown[]) => void;
+type MockStream = Pick<ReturnType<typeof createWriteStream>, 'close' | 'destroy' | 'write' | 'once' | 'off'> & {
+ fd: number;
+ emit: (event: string, ...args: unknown[]) => void;
+};
+
+const createLogStreamMock = (fd = 42, autoOpen = true): MockStream => {
+ const listeners: Record<string, Listener[]> = {};
+ const stream = {
fd,
close: vi.fn(),
destroy: vi.fn(),
write: vi.fn(),
once: vi.fn(),
off: vi.fn(),
- };
+ } as unknown as MockStream;
- stream.once.mockImplementation((event: string, cb: (...args: any[]) => void) => {
+ vi.mocked(stream.once).mockImplementation((event: string, cb: Listener) => {
listeners[event] = listeners[event] ?? [];
listeners[event].push(cb);
if (event === 'open' && autoOpen) cb();
return stream;
});
- stream.off.mockImplementation((event: string, cb: (...args: any[]) => void) => {
+ vi.mocked(stream.off).mockImplementation((event: string, cb: Listener) => {
listeners[event] = (listeners[event] ?? []).filter((fn) => fn !== cb);
return stream;
});
stream.emit = (event: string, ...args: any[]) => {
(listeners[event] ?? []).forEach((fn) => fn(...args));
};
- return stream as ReturnType<typeof createWriteStream> & {
- emit: (event: string, ...args: any[]) => void;
- };
+ return stream;
};(You’ll also want to replace the remaining ...args: any[] in emit with unknown[] in the final version.)
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In api/src/unraid-api/cli/nodemon.service.spec.ts around lines 11 to 47, remove
all usages of `any` in the mock helpers: change the listeners type to
Record<string, Array<(...args: unknown[]) => void>> and make the callback
signatures use unknown[] instead of any[]; type `stream` more precisely (e.g.
use a cast to ReturnType<typeof createWriteStream> & { emit: (event: string,
...args: unknown[]) => void } or a Partial/explicit interface) so its methods
(close, destroy, write, once, off) keep proper function/mocked types instead of
any; update the implementations of once/off to use the new callback types and
update stream.emit to accept unknown[]; and for createSpawnMock, replace the any
cast with a typed return (e.g. cast to ReturnType<typeof spawn> or a small
explicit type { pid?: number; unref: () => void }) so no `any` remains.
| it('throws error when directory creation fails', async () => { | ||
| const service = new NodemonService(logger); | ||
| const error = new Error('Permission denied'); | ||
| mockMkdir.mockRejectedValue(error); | ||
|
|
||
| await expect(service.ensureNodemonDependencies()).rejects.toThrow('Permission denied'); | ||
| expect(mockMkdir).toHaveBeenCalledWith('/var/log/unraid-api', { recursive: true }); | ||
| }); | ||
|
|
||
| it('starts nodemon and writes pid file', async () => { | ||
| const service = new NodemonService(logger); | ||
| const spawnMock = createSpawnMock(123); | ||
| vi.mocked(spawn).mockReturnValue(spawnMock); | ||
| killSpy.mockReturnValue(true); | ||
| findMatchingSpy.mockResolvedValue([]); | ||
|
|
||
| await service.start({ env: { LOG_LEVEL: 'DEBUG' } }); | ||
|
|
||
| expect(stopPm2Spy).toHaveBeenCalled(); | ||
| expect(spawn).toHaveBeenCalledWith( | ||
| process.execPath, | ||
| ['/usr/bin/nodemon', '--config', '/etc/unraid-api/nodemon.json', '--quiet'], | ||
| { | ||
| cwd: '/usr/local/unraid-api', | ||
| env: expect.objectContaining({ LOG_LEVEL: 'DEBUG' }), | ||
| detached: true, | ||
| stdio: ['ignore', 42, 42], | ||
| } | ||
| ); | ||
| expect(openSync).toHaveBeenCalledWith('/var/log/unraid-api/nodemon.log', 'a'); | ||
| expect(spawnMock.unref).toHaveBeenCalled(); | ||
| expect(mockWriteFile).toHaveBeenCalledWith('/var/run/unraid-api/nodemon.pid', '123'); | ||
| expect(logger.info).toHaveBeenCalledWith('Started nodemon (pid 123)'); | ||
| }); | ||
|
|
||
| it('throws error and aborts start when directory creation fails', async () => { | ||
| const service = new NodemonService(logger); | ||
| const error = new Error('Permission denied'); | ||
| mockMkdir.mockRejectedValue(error); | ||
|
|
||
| await expect(service.start()).rejects.toThrow('Permission denied'); | ||
| expect(logger.error).toHaveBeenCalledWith( | ||
| 'Failed to ensure nodemon dependencies: Permission denied' | ||
| ); | ||
| expect(spawn).not.toHaveBeenCalled(); | ||
| }); | ||
|
|
||
| it('throws error when spawn fails', async () => { | ||
| const service = new NodemonService(logger); | ||
| const error = new Error('Command not found'); | ||
| vi.mocked(spawn).mockImplementation(() => { | ||
| throw error; | ||
| }); | ||
|
|
||
| await expect(service.start()).rejects.toThrow('Failed to start nodemon: Command not found'); | ||
| expect(mockWriteFile).not.toHaveBeenCalledWith( | ||
| '/var/run/unraid-api/nodemon.pid', | ||
| expect.anything() | ||
| ); | ||
| expect(logger.info).not.toHaveBeenCalled(); | ||
| }); | ||
|
|
||
| it('throws a clear error when the log file cannot be opened', async () => { | ||
| const service = new NodemonService(logger); | ||
| const openError = new Error('EACCES: permission denied'); | ||
| vi.mocked(openSync).mockImplementation(() => { | ||
| throw openError; | ||
| }); | ||
|
|
||
| await expect(service.start()).rejects.toThrow( | ||
| 'Failed to start nodemon: EACCES: permission denied' | ||
| ); | ||
| expect(spawn).not.toHaveBeenCalled(); | ||
| }); | ||
|
|
||
| it('throws error when pid is missing', async () => { | ||
| const service = new NodemonService(logger); | ||
| const spawnMock = createSpawnMock(undefined); | ||
| vi.mocked(spawn).mockReturnValue(spawnMock); | ||
|
|
||
| await expect(service.start()).rejects.toThrow( | ||
| 'Failed to start nodemon: process spawned but no PID was assigned' | ||
| ); | ||
| expect(mockWriteFile).not.toHaveBeenCalledWith( | ||
| '/var/run/unraid-api/nodemon.pid', | ||
| expect.anything() | ||
| ); | ||
| expect(logger.info).not.toHaveBeenCalled(); | ||
| }); |
Don’t assert exact thrown error messages; use .rejects.toThrow() (repo test rule).
Several tests do await expect(...).rejects.toThrow('...'). This is explicitly discouraged; prefer validating via side-effects (logger calls, no spawn/writeFile, etc.) and just assert that it throws.
- await expect(service.ensureNodemonDependencies()).rejects.toThrow('Permission denied');
+ await expect(service.ensureNodemonDependencies()).rejects.toThrow();Apply the same pattern to the other rejects.toThrow('...') cases in this file, and (if you still need to check messaging) prefer expect(logger.error).toHaveBeenCalledWith(expect.stringContaining('...')).
As per coding guidelines, avoid testing exact error wording unless the wording/format is the requirement.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In api/src/unraid-api/cli/nodemon.service.spec.ts around lines 148 to 236,
multiple tests assert exact thrown error messages using await
expect(...).rejects.toThrow('...'), which violates the repo test rule; change
those assertions to only assert that the promise rejects (e.g., await
expect(...).rejects.toThrow()) and verify failure via side-effects instead
(logger.error called with expect.stringContaining('...'), spawn/writeFile not
called, etc.), replacing exact message checks in all affected tests and
optionally adding
expect(logger.error).toHaveBeenCalledWith(expect.stringContaining('...')) where
message content still needs validation.
| export async function countNodemonProcesses(): Promise<number> { | ||
| const result = await remoteExecSafe( | ||
| "ps -eo pid,args 2>/dev/null | grep -E 'nodemon.*nodemon.json' | grep -v grep | wc -l" | ||
| ); | ||
| const count = parseInt(result.stdout.trim(), 10); | ||
| return isNaN(count) ? 0 : count; | ||
| } | ||
|
|
||
| /** | ||
| * Counts the number of main.js worker processes running. | ||
| * Looks for processes matching the pattern `node.*dist/main.js`. | ||
| * | ||
| * @returns The number of main.js processes (should be 0 or 1 in normal operation) | ||
| * | ||
| * @example | ||
| * ```typescript | ||
| * const count = await countMainProcesses(); | ||
| * expect(count).toBe(1); // Exactly one worker should run | ||
| * ``` | ||
| */ | ||
| export async function countMainProcesses(): Promise<number> { | ||
| const result = await remoteExecSafe( | ||
| "ps -eo args 2>/dev/null | grep -E 'node.*dist/main\\.js' | grep -v grep | wc -l" | ||
| ); | ||
| const count = parseInt(result.stdout.trim(), 10); | ||
| return isNaN(count) ? 0 : count; | ||
| } |
Avoid remoteExecSafe() in counting/assertions (can create false-green tests). If SSH/execa throws, remoteExecSafe() currently returns { exitCode: 0, stdout: '' }, so counts become 0 and assertNoApiProcesses() can pass even when the remote host is unreachable. Prefer remoteExec() and explicitly fail on non-zero exitCode (or change remoteExecSafe() semantics globally if intended).
export async function countNodemonProcesses(): Promise<number> {
- const result = await remoteExecSafe(
+ const result = await remoteExec(
"ps -eo pid,args 2>/dev/null | grep -E 'nodemon.*nodemon.json' | grep -v grep | wc -l"
);
+ if (result.exitCode !== 0) throw new Error(`Failed to count nodemon processes: ${result.stderr}`);
const count = parseInt(result.stdout.trim(), 10);
return isNaN(count) ? 0 : count;
}
@@
export async function countMainProcesses(): Promise<number> {
- const result = await remoteExecSafe(
+ const result = await remoteExec(
"ps -eo args 2>/dev/null | grep -E 'node.*dist/main\\.js' | grep -v grep | wc -l"
);
+ if (result.exitCode !== 0) throw new Error(`Failed to count main.js processes: ${result.stderr}`);
const count = parseInt(result.stdout.trim(), 10);
return isNaN(count) ? 0 : count;
}Also applies to: 156-173, 187-199
🤖 Prompt for AI Agents
In tests/system-integration/src/helpers/process.ts around lines 98 to 124 (and
similarly at 156-173 and 187-199), the functions use remoteExecSafe which
swallows SSH/execa errors by returning exitCode 0 and empty stdout, potentially
producing false-green counts; change these to use remoteExec instead, check the
returned exitCode and throw or rethrow when non-zero before parsing stdout, so
failed remote commands cause the test to fail rather than return a 0 count.
| export async function waitForServerOffline(timeout = FIVE_MINUTES): Promise<boolean> { | ||
| const deadline = Date.now() + timeout; | ||
| const pollInterval = FIVE_SECONDS; | ||
|
|
||
| while (Date.now() < deadline) { | ||
| const result = await remoteExec('echo online'); | ||
| if (result.exitCode !== 0 || !result.stdout.includes('online')) { | ||
| return true; | ||
| } | ||
| await sleep(pollInterval); | ||
| } | ||
|
|
||
| return false; | ||
| } |
waitForServerOffline() should treat SSH exceptions as “offline” (currently can throw and fail tests).
Right now, if remoteExec('echo online') throws during the reboot transition, the function will throw instead of returning true.
export async function waitForServerOffline(timeout = FIVE_MINUTES): Promise<boolean> {
const deadline = Date.now() + timeout;
const pollInterval = FIVE_SECONDS;
while (Date.now() < deadline) {
- const result = await remoteExec('echo online');
- if (result.exitCode !== 0 || !result.stdout.includes('online')) {
- return true;
- }
+ try {
+ const result = await remoteExec('echo online');
+ if (result.exitCode !== 0 || !result.stdout.includes('online')) return true;
+ } catch {
+ // SSH connection failed -> treat as offline
+ return true;
+ }
await sleep(pollInterval);
}
return false;
}🤖 Prompt for AI Agents
In tests/system-integration/src/helpers/server.ts around lines 24 to 37,
waitForServerOffline currently lets exceptions from remoteExec propagate; wrap
the remoteExec call in a try/catch and treat any thrown error as an offline
condition by returning true. Preserve the existing checks for non-zero exitCode
and missing 'online' stdout, and keep the sleep/polling behavior unchanged.
| describe('singleton daemon', () => { | ||
| beforeAll(async () => { | ||
| if (!process.env.SERVER) { | ||
| throw new Error('SERVER environment variable must be set'); | ||
| } | ||
| }); | ||
|
|
||
| afterAll(async () => { | ||
| await cleanup(); | ||
| await startApi(); | ||
| }); | ||
|
|
||
| beforeEach(async () => { | ||
| await cleanup(); | ||
| }); | ||
|
|
Skip (don’t fail) when SERVER isn’t configured, to avoid accidental CI/dev breakage.
Throwing in beforeAll will hard-fail the suite if it’s executed unintentionally.
-describe('singleton daemon', () => {
- beforeAll(async () => {
- if (!process.env.SERVER) {
- throw new Error('SERVER environment variable must be set');
- }
- });
+const systemDescribe = process.env.SERVER ? describe : describe.skip;
+
+systemDescribe('singleton daemon', () => {(Then you can remove the beforeAll env check entirely.)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| describe('singleton daemon', () => { | |
| beforeAll(async () => { | |
| if (!process.env.SERVER) { | |
| throw new Error('SERVER environment variable must be set'); | |
| } | |
| }); | |
| afterAll(async () => { | |
| await cleanup(); | |
| await startApi(); | |
| }); | |
| beforeEach(async () => { | |
| await cleanup(); | |
| }); | |
| const systemDescribe = process.env.SERVER ? describe : describe.skip; | |
| systemDescribe('singleton daemon', () => { | |
| afterAll(async () => { | |
| await cleanup(); | |
| await startApi(); | |
| }); | |
| beforeEach(async () => { | |
| await cleanup(); | |
| }); | |
🤖 Prompt for AI Agents
In tests/system-integration/src/tests/singleton-daemon.test.ts around lines
16–31, the current beforeAll throws when process.env.SERVER is missing which
hard-fails the suite; replace that pattern by skipping the whole suite instead:
remove the beforeAll check and wrap the describe with a conditional describe
that uses describe when SERVER is set and describe.skip when it’s not (e.g.,
const describeIfServer = process.env.SERVER ? describe : describe.skip; then use
describeIfServer('singleton daemon', ...)), ensuring the suite is skipped (not
failed) when SERVER isn’t configured.
| "types": ["vitest/globals"] | ||
| }, |
Add Node types (current types likely disables them).
- "types": ["vitest/globals"]
+ "types": ["node", "vitest/globals"]🤖 Prompt for AI Agents
In tests/system-integration/tsconfig.json around lines 14–15 the "types" array
currently only includes "vitest/globals", which disables Node built-in types;
update that "types" entry to include Node (for example "types": ["node",
"vitest/globals"]) so TypeScript sees Node globals and typings used in tests, or
remove/adjust the types field if you prefer default type inclusion.
| export * from "./fragment-masking"; | ||
| export * from "./gql"; |
Use .js extensions in relative exports to match ESM rule.
-export * from "./fragment-masking";
-export * from "./gql";
+export * from "./fragment-masking.js";
+export * from "./gql.js";📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| export * from "./fragment-masking"; | |
| export * from "./gql"; | |
| export * from "./fragment-masking.js"; | |
| export * from "./gql.js"; |
🤖 Prompt for AI Agents
In web/src/composables/gql/index.ts lines 1-2, the relative export paths
currently omit file extensions which violates the ESM rule; change the two
export statements to include the .js extension (e.g., export * from
"./fragment-masking.js"; export * from "./gql.js";) so the module resolver uses
explicit ESM paths and build/runtime will resolve correctly.
@pujitm - if you rebase this (drop my commits) and then get this ready, I think we can merge this.
Motivation
I found myself using ssh or the Unraid console to test the unraid-api CLI and other OS functionality regularly. It became hard to keep track of all the test cases and edge conditions to consider.

Originally, I used BATS (the bash testing framework), but after weighing pros and cons, I decided that a TypeScript-based test suite would be more suitable for contributors to this project than a bash-native one.
Here's a run from my potato server (the Zima board):
Summary by CodeRabbit
Release Notes
New Features
Process Management
Bug Fixes & Improvements
Documentation
Version