Writers
Logger writers are responsible for handling how and where log messages are output. In Hive Logger, writers are pluggable components that receive structured log data and determine its final destination and format. This allows you to easily customize logging behavior, such as printing logs to the console, writing them as JSON, storing them in memory for testing, or sending them to external systems.
By default, Hive Logger provides several built-in writers, but you can also implement your own to suit your application's needs. The built-ins are:
MemoryLogWriter
Writes logs to memory, allowing you to access them programmatically afterwards. Mostly useful for testing.
import { Logger, MemoryLogWriter } from "@graphql-hive/logger";
const writer = new MemoryLogWriter();
const log = new Logger({ writers: [writer] });
log.info({ my: "attrs" }, "Hello World!");
console.log(writer.logs);
Outputs:
[ { level: 'info', msg: 'Hello World!', attrs: { my: 'attrs' } } ]
ConsoleLogWriter (default)
The default log writer used by the Hive Logger. It outputs log messages to the console in a human-friendly, colorized format, making it easy to distinguish log levels and read structured attributes. Each log entry includes a timestamp, the log level (with color), the message, and any additional attributes (with colored keys), which are pretty-printed and formatted for clarity.
The writer works in both Node.js and browser-like environments, automatically disabling colors
when they are not supported. This makes ConsoleLogWriter a solid default for most use cases,
providing clear and readable logs out of the box.
import { ConsoleLogWriter, Logger } from "@graphql-hive/logger";
const writer = new ConsoleLogWriter({
noColor: true, // defaults to env.NO_COLOR. read more: https://no-color.org/
noTimestamp: true,
});
const log = new Logger({ writers: [writer] });
log.info({ my: "attrs" }, "Hello World!");
Outputs:
INF Hello World!
my: "attrs"
Disabling Colors
You can disable colors in the console output by setting the NO_COLOR=1 environment variable.
Following the NO_COLOR convention, environments that need uncolored output are expected to set this
variable themselves, and the logger will respect it automatically.
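Per the convention at https://no-color.org/, any non-empty value of NO_COLOR disables colored output, not just `1`. The check can be sketched like this; the helper name `isColorDisabled` is illustrative and not part of Hive Logger's API:

```typescript
// A minimal check following the NO_COLOR convention (https://no-color.org/):
// the variable disables colored output when present and non-empty,
// regardless of its value.
function isColorDisabled(env: Record<string, string | undefined>): boolean {
  const noColor = env["NO_COLOR"];
  return noColor !== undefined && noColor !== "";
}

console.log(isColorDisabled({ NO_COLOR: "1" })); // true
console.log(isColorDisabled({ NO_COLOR: "" })); // false, empty values are ignored
console.log(isColorDisabled({})); // false
```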
JSONLogWriter
Used when the LOG_JSON=1 environment variable is provided.
Built-in log writer that outputs each log entry as a structured JSON object. When used, it prints logs to the console in JSON format, including all provided attributes, the log level, message, and a timestamp.
In the JSONLogWriter implementation, any attributes you provide under the keys msg, timestamp, or
level will be overwritten in the final log output, because the writer explicitly sets these fields
when constructing the log object. If you include these keys in your attributes, their values will
be replaced by the logger's own values in the JSON output.
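This overwriting behavior can be illustrated with object spread order: when the user attributes are spread first and the logger-controlled fields are set afterwards, the latter always win. The following is a simplified sketch of that behavior, not the actual JSONLogWriter source:

```typescript
// Simplified sketch of why reserved keys are overwritten:
// user attributes are spread first, then the logger-controlled
// fields, which take precedence over any clashing keys.
function buildLogObject(
  level: string,
  attrs: Record<string, unknown>,
  msg: string,
): Record<string, unknown> {
  return {
    ...attrs, // user attributes first...
    level, // ...then logger fields, replacing any clashing keys
    msg,
    timestamp: new Date().toISOString(),
  };
}

const obj = buildLogObject("info", { msg: "ignored", my: "attrs" }, "Hello World!");
console.log(obj.msg); // "Hello World!" — the user-provided "msg" was replaced
```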
If the LOG_JSON_PRETTY=1 environment variable is provided, the output will be pretty-printed for
readability; otherwise, it is compact.
This writer's format is ideal for machine parsing, log aggregation, or integrating with external logging systems, especially useful for production environments or when logs need to be consumed by other tools.
import { JSONLogWriter, Logger } from "@graphql-hive/logger";
const log = new Logger({ writers: [new JSONLogWriter()] });
log.info({ my: "attrs" }, "Hello World!");
Outputs:
{"my":"attrs","level":"info","msg":"Hello World!","timestamp":"2025-04-10T14:00:00.000Z"}
Or pretty printed:
$ LOG_JSON_PRETTY=1 node example.js
{
"my": "attrs",
"level": "info",
"msg": "Hello World!",
"timestamp": "2025-04-10T14:00:00.000Z"
}
Optional Writers
Hive Logger includes writers for some common logging libraries of the JavaScript ecosystem, available as optional peer dependencies.
LogTapeLogWriter
Use the LogTape logger library for writing Hive Logger's logs.
@logtape/logtape is an optional peer dependency, so you must install it first.
npm i @logtape/logtape
import { Logger } from "@graphql-hive/logger";
import { LogTapeLogWriter } from "@graphql-hive/logger/writers/logtape";
import { configure, getConsoleSink } from "@logtape/logtape";
await configure({
sinks: { console: getConsoleSink() },
loggers: [{ category: "hive-gateway", sinks: ["console"] }],
});
const log = new Logger({ writers: [new LogTapeLogWriter()] });
log.info({ some: "attributes" }, "hello world");
Outputs:
14:00:00.000 INF hive-gateway hello world
PinoLogWriter (Node.js Only)
Use the Node.js pino logger library for writing Hive Logger's
logs.
pino is an optional peer dependency, so you must install it first.
npm i pino pino-pretty
import pino from "pino";
import { Logger } from "@graphql-hive/logger";
import { PinoLogWriter } from "@graphql-hive/logger/writers/pino";
const pinoLogger = pino({
transport: {
target: "pino-pretty",
},
});
const log = new Logger({ writers: [new PinoLogWriter(pinoLogger)] });
log.info({ some: "attributes" }, "hello world");
Outputs:
[14:00:00.000] INFO (20744): hello world
some: "attributes"
WinstonLogWriter (Node.js Only)
Use the Node.js winston logger library for writing Hive
Logger's logs.
winston is an optional peer dependency, so you must install it first.
npm i winston
import winston from "winston";
import { Logger } from "@graphql-hive/logger";
import { WinstonLogWriter } from "@graphql-hive/logger/writers/winston";
const winstonLogger = winston.createLogger({
transports: [new winston.transports.Console()],
});
const log = new Logger({ writers: [new WinstonLogWriter(winstonLogger)] });
log.info({ some: "attributes" }, "hello world");
Outputs:
{"level":"info","message":"hello world","some":"attributes"}
Winston does not have a "trace" log level. Hive Logger will instead use "verbose" when writing logs to Winston.
Custom Writers
You can implement custom log writers for the Hive Logger by creating a class that implements the
LogWriter interface. The interface requires a single write method, which receives the log level,
attributes, and message, and optionally a flush method that lets you ensure all of the writer's
pending work is completed when the logger is flushed.
Your writer can perform any action, such as sending logs to a file, external service, or custom destination.
Writers can be synchronous (returning void) or asynchronous (returning a Promise<void>). If your
writer performs asynchronous operations (like network requests or file writes), simply return a
promise from the write method.
Furthermore, you can optionally implement the flush method to ensure that all pending writes are
completed before the logger is disposed or flushed. This is particularly useful for asynchronous
writers that need to ensure all logs are written before the application exits or the logger is no
longer needed.
import { Attributes, LogLevel } from "@graphql-hive/logger";
interface LogWriter {
write(
level: LogLevel,
attrs: Attributes | null | undefined,
msg: string | null | undefined,
): void | Promise<void>;
flush?(): void | Promise<void>;
}
Example of HTTP Writer
import {
Attributes,
ConsoleLogWriter,
Logger,
LogLevel,
LogWriter,
} from "@graphql-hive/logger";
class HTTPLogWriter implements LogWriter {
async write(level: LogLevel, attrs: Attributes, msg: string) {
await fetch("https://my-log-service.com", {
method: "POST",
headers: { "content-type": "application/json" },
body: JSON.stringify({ level, attrs, msg }),
});
}
}
const log = new Logger({
// send logs both to the HTTP logging service and output them to the console
writers: [new HTTPLogWriter(), new ConsoleLogWriter()],
});
log.info("Hello World!");
await log.flush(); // make sure all async writes settle
Example of Daily File Log Writer (Node.js Only)
Here is an example of a custom log writer that writes logs to a daily log file. It will write to a file for each day in a given directory.
import fs from "node:fs/promises";
import path from "node:path";
import {
Attributes,
jsonStringify,
LogLevel,
LogWriter,
} from "@graphql-hive/logger";
export class DailyFileLogWriter implements LogWriter {
constructor(
private dir: string,
private name: string,
) {}
write(
level: LogLevel,
attrs: Attributes | null | undefined,
msg: string | null | undefined,
) {
const date = new Date().toISOString().split("T")[0];
const logfile = path.resolve(this.dir, `${this.name}_${date}.log`);
return fs.appendFile(logfile, jsonStringify({ level, msg, attrs }));
}
}
Flushing and Non-Blocking Logging
The logger does not block when you log asynchronously. Instead, it tracks all pending async writes
internally. When you call log.flush() it waits for all pending writes to finish, ensuring no logs
are lost on shutdown. During normal operation, logging remains fast and non-blocking, even if some
writers are async.
This design allows you to use async writers without impacting the performance of your application or blocking the main thread.
After all writes have been completed, the logger will call the optional flush method on the
writers, executing any custom finalization logic you may have implemented.
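The promise tracking described above can be sketched with a minimal model: each async write is added to a set of pending promises without being awaited, and flushing simply waits for the set to settle. This is an illustrative simplification, not Hive Logger's actual implementation:

```typescript
// Illustrative model of non-blocking async writes with flush tracking.
class TrackingLogger {
  private pending = new Set<Promise<void>>();

  constructor(
    private writers: Array<{ write(msg: string): void | Promise<void> }>,
  ) {}

  log(msg: string): void {
    for (const writer of this.writers) {
      const result = writer.write(msg);
      if (result instanceof Promise) {
        // Track the write without awaiting it, so logging stays non-blocking.
        this.pending.add(result);
        result.finally(() => this.pending.delete(result));
      }
    }
  }

  async flush(): Promise<void> {
    // Wait for every pending async write to settle before returning.
    await Promise.all(this.pending);
  }
}
```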
Explicit Resource Management
The Hive Logger also supports Explicit Resource Management. This allows you to ensure that all pending asynchronous log writes are properly flushed before your application exits or when the logger is no longer needed.
You can use the logger with await using (in environments that support it) to wait for all log
operations to complete. This is especially useful in serverless or short-lived environments where
you want to guarantee that no logs are lost due to unfinished asynchronous operations.
import {
Attributes,
ConsoleLogWriter,
Logger,
LogLevel,
LogWriter,
} from "@graphql-hive/logger";
class HTTPLogWriter implements LogWriter {
async write(level: LogLevel, attrs: Attributes, msg: string) {
await fetch("https://my-log-service.com", {
method: "POST",
headers: { "content-type": "application/json" },
body: JSON.stringify({ level, attrs, msg }),
});
}
}
{
await using log = new Logger({
// send logs both to the HTTP logging service and output them to the console
writers: [new HTTPLogWriter(), new ConsoleLogWriter()],
});
log.info("Hello World!");
}
// logger went out of scope and all of the logs have been flushed
Handling Async Write Errors
The Logger handles write errors for asynchronous writers by tracking all write promises. When
await log.flush() is called (including during async disposal), it waits for all pending writes to
settle. If any writes fail (i.e., their promises reject), their errors are collected and, once all
writes have settled, an AggregateError containing all of the individual write errors is thrown.
import { Logger } from "@graphql-hive/logger";
let i = 0;
const log = new Logger({
writers: [
{
async write() {
i++;
throw new Error("Write failed! #" + i);
},
},
],
});
// logging itself does not throw
log.info("hello");
log.info("world");
try {
await log.flush();
} catch (e) {
// flush fails with an AggregateError containing each failed write
console.error(e);
}
Outputs:
AggregateError: Failed to flush 2 writes
at async <anonymous> (/project/example.js:20:3) {
[errors]: [
Error: Write failed! #1
at Object.write (/project/example.js:9:15),
Error: Write failed! #2
at Object.write (/project/example.js:9:15)
]
}