The only agent that thinks for itself

Autonomous Monitoring with self-learning AI built-in, operating independently across your entire stack.

Unlimited Metrics & Logs
Machine learning & MCP
5% CPU, 150MB RAM
3GB disk, >1 year retention
800+ integrations, zero config
Dashboards, alerts out of the box
> Discover Netdata Agents
Centralized metrics streaming and storage

Aggregate metrics from multiple agents into centralized Parent nodes for unified monitoring across your infrastructure.

Stream from unlimited agents
Long-term data retention
High availability clustering
Data replication & backup
Scalable architecture
Enterprise-grade security
> Learn about Parents
Fully managed cloud platform

Access your monitoring data from anywhere with our SaaS platform. No infrastructure to manage, automatic updates, and global availability.

Zero infrastructure management
99.9% uptime SLA
Global data centers
Automatic updates & patches
Enterprise SSO & RBAC
SOC2 & ISO certified
> Explore Netdata Cloud
Deploy Netdata Cloud in your infrastructure

Run the full Netdata Cloud platform on-premises for complete data sovereignty and compliance with your security policies.

Complete data sovereignty
Air-gapped deployment
Custom compliance controls
Private network integration
Dedicated support team
Kubernetes & Docker support
> Learn about Cloud On-Premises
Powerful, intuitive monitoring interface

Modern, responsive UI built for real-time troubleshooting with customizable dashboards and advanced visualization capabilities.

Real-time chart updates
Customizable dashboards
Dark & light themes
Advanced filtering & search
Responsive on all devices
Collaboration features
> Explore Netdata UI
Monitor on the go

Native iOS and Android apps bring full monitoring capabilities to your mobile device with real-time alerts and notifications.

iOS & Android apps
Push notifications
Touch-optimized interface
Offline data access
Biometric authentication
Widget support
> Download apps

Best energy efficiency

True real-time per-second

100% automated zero config

Centralized observability

Multi-year retention

High availability built-in

Zero maintenance

Always up-to-date

Enterprise security

Complete data control

Air-gap ready

Compliance certified

Millisecond responsiveness

Infinite zoom & pan

Works on any device

Native performance

Instant alerts

Monitor anywhere

80% Faster Incident Resolution
AI-powered troubleshooting from detection, to root cause and blast radius identification, to reporting.
True Real-Time and Simple, even at Scale
Linearly and infinitely scalable full-stack observability that can be deployed even mid-crisis.
90% Cost Reduction, Full Fidelity
Instead of centralizing the data, Netdata distributes the code, eliminating pipelines and complexity.
Control Without Surrender
SOC 2 Type 2 certified with every metric kept on your infrastructure.
Integrations

800+ collectors and notification channels, auto-discovered and ready out of the box.

800+ data collectors
Auto-discovery & zero config
Cloud, infra, app protocols
Notifications out of the box
> Explore integrations
Real Results
46% Cost Reduction

Reduced monitoring costs by 46% while cutting staff overhead by 67%.

— Leonardo Antunez, Codyas

Zero Pipeline

No data shipping. No central storage costs. Query at the edge.

From Our Users
"Out-of-the-Box"

So many out-of-the-box features! I mostly don't have to develop anything.

— Simon Beginn, LANCOM Systems

No Query Language

Point-and-click troubleshooting. No PromQL, no LogQL, no learning curve.

Enterprise Ready
67% Less Staff, 46% Cost Cut

Enterprise efficiency without enterprise complexity—real ROI from day one.

— Leonardo Antunez, Codyas

SOC 2 Type 2 Certified

Zero data egress. Only metadata reaches the cloud. Your metrics stay on your infrastructure.

Full Coverage
800+ Collectors

Auto-discovered and configured. No manual setup required.

Any Notification Channel

Slack, PagerDuty, Teams, email, webhooks—all built-in.

From Our Users
"A Rare Unicorn"

Netdata gives more than you invest in it. A rare unicorn that obeys the Pareto rule.

— Eduard Porquet Mateu, TMB Barcelona

99% Downtime Reduction

Reduced website downtime by 99% and cloud bill by 30% using Netdata alerts.

— Falkland Islands Government

Real Savings
30% Cloud Cost Reduction

Optimized resource allocation based on Netdata alerts cut cloud spending by 30%.

— Falkland Islands Government

46% Cost Cut

Reduced monitoring staff by 67% while cutting operational costs by 46%.

— Codyas

Real Coverage
"Plugin for Everything"

Netdata has agent capacity or a plugin for everything, including Windows and Kubernetes.

— Eduard Porquet Mateu, TMB Barcelona

"Out-of-the-Box"

So many out-of-the-box features! I mostly don't have to develop anything.

— Simon Beginn, LANCOM Systems

Real Speed
Troubleshooting in 30 Seconds

From 2-3 minutes to 30 seconds—instant visibility into any node issue.

— Matthew Artist, Nodecraft

20% Downtime Reduction

20% less downtime and 40% budget optimization from out-of-the-box monitoring.

— Simon Beginn, LANCOM Systems

Pay per Node. Unlimited Everything Else.

One price per node. Unlimited metrics, logs, users, and retention. No per-GB surprises.

Free tier—forever
No metric limits or caps
Retention you control
Cancel anytime
> See pricing plans
What's Your Monitoring Really Costing You?

Most teams overpay by 40-60%. Let's find out why.

Expose hidden metric charges
Calculate tool consolidation
Customers report 30-67% savings
Results in under 60 seconds
> See what you're really paying
Your Infrastructure Is Unique. Let's Talk.

Because monitoring 10 nodes is different from monitoring 10,000.

On-prem & air-gapped deployment
Volume pricing & agreements
Architecture review for your scale
Compliance & security support
> Start a conversation
Monitoring That Sells Itself

Deploy in minutes. Impress clients in hours. Earn recurring revenue for years.

30-second live demos close deals
Zero config = zero support burden
Competitive margins & deal protection
Response in 48 hours
> Apply to partner
Per-Second Metrics at Homelab Prices

Same engine, same dashboards, same ML. Just priced for tinkerers.

Community: Free forever · 5 nodes · non-commercial
Homelab: $90/yr · unlimited nodes · fair usage
> Start monitoring your lab—free
$1,000 Per Referral. Unlimited Referrals.

Your colleagues get 10% off. You get 10% commission. Everyone wins.

10% of subscriptions, up to $1,000 each
Track earnings inside Netdata Cloud
PayPal/Venmo payouts in 3-4 weeks
No caps, no complexity
> Get your referral link
Cost Proof
40% Budget Optimization

"Netdata's significant positive impact" — LANCOM Systems

Calculate Your Savings

Compare vs Datadog, Grafana, Dynatrace

Savings Proof
46% Cost Reduction

"Cut costs by 46%, staff by 67%" — Codyas

30% Cloud Bill Savings

"Reduced cloud bill by 30%" — Falkland Islands Gov

Enterprise Proof
"Better Than Combined Alternatives"

"Better observability with Netdata than combining other tools." — TMB Barcelona

Real Engineers, <24h Response

DPA, SLAs, on-prem, volume pricing

Why Partners Win
Demo Live Infrastructure

One command, 30 seconds, real data—no sandbox needed

Zero Tickets, High Margins

Auto-config + per-node pricing = predictable profit

Homelab Ready
"Absolutely Incredible"

"We tested every monitoring system under the sun." — Benjamin Gabler, CEO Rocket.Net

76k+ GitHub Stars

3rd most starred monitoring project

Worth Recommending
Product That Delivers

Customers report 40-67% cost cuts, 99% downtime reduction

Zero Risk to Your Rep

Free tier lets them try before they buy

Never Fight Fires Alone

Docs, community, and expert help—pick your path to resolution.

Learn.netdata.cloud docs
Discord, Forums, GitHub
Premium support available
> Get answers now
60 Seconds to First Dashboard

One command to install. Zero config. 850+ integrations documented.

Linux, Windows, K8s, Docker
Auto-discovers your stack
> Start monitoring now
See Netdata in Action

Watch real-time monitoring in action—demos, tutorials, and engineering deep dives.

Product demos and walkthroughs
Real infrastructure, not staged
> Start with the 3-minute tour
Level Up Your Monitoring
Real problems. Real solutions. 112+ guides from basic monitoring to AI observability.
76,000+ Engineers Strong
615+ contributors. 1.5M daily downloads. One mission: simplify observability.
Per-Second. 90% Cheaper. Data Stays Home.
Side-by-side comparisons: costs, real-time granularity, and data sovereignty for every major tool.

See why teams switch from Datadog, Prometheus, Grafana, and more.

> Browse all comparisons
Edge-Native Observability, Born Open Source
Per-second visibility, ML on every metric, and data that never leaves your infrastructure.
Founded in 2016
615+ contributors worldwide
Remote-first, engineering-driven
Open source first
> Read our story
Promises We Publish—and Prove
12 principles backed by open code, independent validation, and measurable outcomes.
Open source, peer-reviewed
Zero config, instant value
Data sovereignty by design
Aligned pricing, no surprises
> See all 12 principles
Edge-Native, AI-Ready, 100% Open
76k+ stars. Full ML, AI, and automation—GPLv3+, not premium add-ons.
76,000+ GitHub stars
GPLv3+ licensed forever
ML on every metric, included
Zero vendor lock-in
> Explore our open source
Build Real-Time Observability for the World
Remote-first team shipping per-second monitoring with ML on every metric.
Remote-first, fully distributed
Open source (76k+ stars)
Challenging technical problems
Your code on millions of systems
> See open roles
Talk to a Netdata Human in <24 Hours
Sales, partnerships, press, or professional services—real engineers, fast answers.
Discuss your observability needs
Pricing and volume discounts
Partnership opportunities
Media and press inquiries
> Book a conversation
Your Data. Your Rules.
On-prem data, cloud control plane, transparent terms.
Trust & Scale
76,000+ GitHub Stars

One of the most popular open-source monitoring projects

SOC 2 Type 2 Certified

Enterprise-grade security and compliance

Data Sovereignty

Your metrics stay on your infrastructure

Validated
University of Amsterdam

"Most energy-efficient monitoring solution" — ICSOC 2023, peer-reviewed

ADASTEC (Autonomous Driving)

"Doesn't miss alerts—mission-critical trust for safety software"

Community Stats
615+ Contributors

Global community improving monitoring for everyone

1.5M+ Downloads/Day

Trusted by teams worldwide

GPLv3+ Licensed

Free forever, fully open source agent

Why Join?
Remote-First

Work from anywhere, async-friendly culture

Impact at Scale

Your work helps millions of systems

Compliance
SOC 2 Type 2

Audited security controls

GDPR Ready

Data stays on your infrastructure

PostgreSQL

Plugin: go.d.plugin Module: postgres

Overview

This collector monitors the activity and performance of Postgres servers, collects replication statistics, metrics for each database, table and index, and more.

It establishes a connection to the Postgres instance via a TCP or UNIX socket. To collect metrics for database tables and indexes, it establishes an additional connection for each discovered database.

This collector is supported on all platforms.

This collector supports collecting metrics from multiple instances of this integration, including remote instances.

Default Behavior

Auto-Detection

By default, it detects instances running on localhost by trying to connect as the root and netdata users, using known PostgreSQL TCP and UNIX sockets:

  • 127.0.0.1:5432
  • /var/run/postgresql/

Limits

Table and index metrics are not collected for databases with more than 50 tables or 250 indexes. These limits can be changed in the configuration file.
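These limits are applied per job. A sketch of raising them in go.d/postgres.conf using the documented max_db_tables and max_db_indexes options (the DSN is illustrative; adjust it to your environment):

```yaml
jobs:
  - name: local
    dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'
    max_db_tables: 200   # default 50; set 0 to disable the limit
    max_db_indexes: 500  # default 250; set 0 to disable the limit
```

Keep in mind that collecting per-table and per-index metrics on very large schemas increases the number of queries the collector runs each interval.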

Performance Impact

The default configuration for this integration is not expected to have a significant performance impact on the system.

Setup

You can configure the postgres collector in two ways:

| Method | Best for | How to |
|--------|----------|--------|
| UI | Fast setup without editing files | Go to Nodes → Configure this node → Collectors → Jobs, search for postgres, then click + to add a job. |
| File | If you prefer configuring via file, or need to automate deployments (e.g., with Ansible) | Edit go.d/postgres.conf and add a job. |

:::important

UI configuration requires a paid Netdata Cloud plan.

:::

Prerequisites

Create netdata user

Create a user and grant it the pg_monitor or pg_read_all_stats built-in role.

To create the netdata user with these permissions, execute the following in the psql session, as a user with CREATEROLE privileges:

CREATE USER netdata;
GRANT pg_monitor TO netdata;

After creating the new user, restart the Netdata Agent with sudo systemctl restart netdata, or the appropriate method for your system.
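Optionally, before restarting the Agent, you can verify that the new user can connect and read the monitoring views. This is a hypothetical smoke test, not part of the official setup; adjust the host and database to match the DSN you will configure:

```shell
# Connect as the netdata user and read a view visible to pg_monitor members.
# If authentication fails, check pg_hba.conf rules for the netdata user.
psql 'postgresql://netdata@127.0.0.1:5432/postgres' -c 'SELECT count(*) FROM pg_stat_activity;'
```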

Configuration

Options

The following options can be defined globally: update_every, autodetection_retry.

| Group | Option | Description | Default | Required |
|-------|--------|-------------|---------|----------|
| Collection | update_every | Data collection interval (seconds). | 1 | no |
| Collection | autodetection_retry | Autodetection retry interval (seconds). Set 0 to disable. | 0 | no |
| Target | dsn | Postgres connection string (DSN). See DSN syntax. | postgres://postgres:postgres@127.0.0.1:5432/postgres | yes |
| Target | timeout | Query timeout (seconds). | 2 | no |
| Filters | collect_databases_matching | Database selector. Controls which databases are included. Uses simple patterns. | | no |
| Limits | max_db_tables | Maximum number of tables per database to collect metrics for (0 = no limit). | 50 | no |
| Limits | max_db_indexes | Maximum number of indexes per database to collect metrics for (0 = no limit). | 250 | no |
| Virtual Node | vnode | Associates this data collection job with a Virtual Node. | | no |
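For instance, collect_databases_matching accepts Netdata simple patterns (space-separated glob patterns; a leading ! excludes a match). A sketch that skips test databases — the pattern and DSN here are illustrative:

```yaml
jobs:
  - name: local
    dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'
    # Exclude any database ending in _test, include everything else.
    # Patterns are evaluated left to right; the first match wins.
    collect_databases_matching: '!*_test *'
```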

via UI

Configure the postgres collector from the Netdata web interface:

  1. Go to Nodes.
  2. Select the node where you want the postgres data-collection job to run and click the gear icon (Configure this node). That node will run the data collection.
  3. The Collectors → Jobs view opens by default.
  4. In the Search box, type postgres (or scroll the list) to locate the postgres collector.
  5. Click the + next to the postgres collector to add a new job.
  6. Fill in the job fields, then click Test to verify the configuration and Submit to save.
    • Test runs the job with the provided settings and shows whether data can be collected.
    • If it fails, an error message appears with details (for example, connection refused, timeout, or command execution errors), so you can adjust and retest.

via File

The configuration file name for this integration is go.d/postgres.conf.

The file format is YAML. Generally, the structure is:

update_every: 1
autodetection_retry: 0
jobs:
  - name: some_name1
  - name: some_name2

You can edit the configuration file using the edit-config script from the Netdata config directory.

cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config go.d/postgres.conf
Examples
TCP socket

An example configuration.

jobs:
  - name: local
    dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'
Unix socket

An example configuration.

jobs:
  - name: local
    dsn: 'host=/var/run/postgresql dbname=postgres user=netdata'
Unix socket (custom port)

Connect to PostgreSQL using a Unix socket with a non-default port (5433).

jobs:
  - name: local
    dsn: 'host=/var/run/postgresql port=5433 dbname=postgres user=netdata'
Multi-instance

Note: When you define multiple jobs, their names must be unique.

Local and remote instances.

jobs:
  - name: local
    dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'

  - name: remote
    dsn: 'postgresql://netdata@203.0.113.0:5432/postgres'

Metrics

Metrics grouped by scope.

The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.

Per PostgreSQL instance

These metrics refer to the entire monitored application.

This scope has no labels.

Metrics:

| Metric | Dimensions | Unit |
|--------|------------|------|
| postgres.connections_utilization | used | percentage |
| postgres.connections_usage | available, used | connections |
| postgres.connections_state_count | active, idle, idle_in_transaction, idle_in_transaction_aborted, disabled | connections |
| postgres.transactions_duration | a dimension per bucket | transactions/s |
| postgres.queries_duration | a dimension per bucket | queries/s |
| postgres.locks_utilization | used | percentage |
| postgres.checkpoints_rate | scheduled, requested | checkpoints/s |
| postgres.checkpoints_time | write, sync | milliseconds |
| postgres.bgwriter_halts_rate | maxwritten | events/s |
| postgres.buffers_io_rate | checkpoint, backend, bgwriter | B/s |
| postgres.buffers_backend_fsync_rate | fsync | calls/s |
| postgres.buffers_allocated_rate | allocated | B/s |
| postgres.wal_io_rate | write | B/s |
| postgres.wal_files_count | written, recycled | files |
| postgres.wal_archiving_files_count | ready, done | files/s |
| postgres.autovacuum_workers_count | analyze, vacuum_analyze, vacuum, vacuum_freeze, brin_summarize | workers |
| postgres.txid_exhaustion_towards_autovacuum_perc | emergency_autovacuum | percentage |
| postgres.txid_exhaustion_perc | txid_exhaustion | percentage |
| postgres.txid_exhaustion_oldest_txid_num | xid | xid |
| postgres.catalog_relations_count | ordinary_table, index, sequence, toast_table, view, materialized_view, composite_type, foreign_table, partitioned_table, partitioned_index | relations |
| postgres.catalog_relations_size | ordinary_table, index, sequence, toast_table, view, materialized_view, composite_type, foreign_table, partitioned_table, partitioned_index | B |
| postgres.uptime | uptime | seconds |
| postgres.databases_count | databases | databases |

Per repl application

These metrics refer to the replication application.

Labels:

| Label | Description |
|-------|-------------|
| application | application name |

Metrics:

| Metric | Dimensions | Unit |
|--------|------------|------|
| postgres.replication_app_wal_lag_size | sent_lag, write_lag, flush_lag, replay_lag | B |
| postgres.replication_app_wal_lag_time | write_lag, flush_lag, replay_lag | seconds |

Per repl slot

These metrics refer to the replication slot.

Labels:

| Label | Description |
|-------|-------------|
| slot | replication slot name |

Metrics:

| Metric | Dimensions | Unit |
|--------|------------|------|
| postgres.replication_slot_files_count | wal_keep, pg_replslot_files | files |

Per database

These metrics refer to the database.

Labels:

| Label | Description |
|-------|-------------|
| database | database name |

Metrics:

| Metric | Dimensions | Unit |
|--------|------------|------|
| postgres.db_transactions_ratio | committed, rollback | percentage |
| postgres.db_transactions_rate | committed, rollback | transactions/s |
| postgres.db_connections_utilization | used | percentage |
| postgres.db_connections_count | connections | connections |
| postgres.db_cache_io_ratio | miss | percentage |
| postgres.db_io_rate | memory, disk | B/s |
| postgres.db_ops_fetched_rows_ratio | fetched | percentage |
| postgres.db_ops_read_rows_rate | returned, fetched | rows/s |
| postgres.db_ops_write_rows_rate | inserted, deleted, updated | rows/s |
| postgres.db_conflicts_rate | conflicts | queries/s |
| postgres.db_conflicts_reason_rate | tablespace, lock, snapshot, bufferpin, deadlock | queries/s |
| postgres.db_deadlocks_rate | deadlocks | deadlocks/s |
| postgres.db_locks_held_count | access_share, row_share, row_exclusive, share_update, share, share_row_exclusive, exclusive, access_exclusive | locks |
| postgres.db_locks_awaited_count | access_share, row_share, row_exclusive, share_update, share, share_row_exclusive, exclusive, access_exclusive | locks |
| postgres.db_temp_files_created_rate | created | files/s |
| postgres.db_temp_files_io_rate | written | B/s |
| postgres.db_size | size | B |

Per table

These metrics refer to the database table.

Labels:

| Label | Description |
|-------|-------------|
| database | database name |
| schema | schema name |
| table | table name |
| parent_table | parent table name |

Metrics:

| Metric | Dimensions | Unit |
|--------|------------|------|
| postgres.table_rows_dead_ratio | dead | percentage |
| postgres.table_rows_count | live, dead | rows |
| postgres.table_ops_rows_rate | inserted, deleted, updated | rows/s |
| postgres.table_ops_rows_hot_ratio | hot | percentage |
| postgres.table_ops_rows_hot_rate | hot | rows/s |
| postgres.table_cache_io_ratio | miss | percentage |
| postgres.table_io_rate | memory, disk | B/s |
| postgres.table_index_cache_io_ratio | miss | percentage |
| postgres.table_index_io_rate | memory, disk | B/s |
| postgres.table_toast_cache_io_ratio | miss | percentage |
| postgres.table_toast_io_rate | memory, disk | B/s |
| postgres.table_toast_index_cache_io_ratio | miss | percentage |
| postgres.table_toast_index_io_rate | memory, disk | B/s |
| postgres.table_scans_rate | index, sequential | scans/s |
| postgres.table_scans_rows_rate | index, sequential | rows/s |
| postgres.table_autovacuum_since_time | time | seconds |
| postgres.table_vacuum_since_time | time | seconds |
| postgres.table_autoanalyze_since_time | time | seconds |
| postgres.table_analyze_since_time | time | seconds |
| postgres.table_null_columns | null | columns |
| postgres.table_size | size | B |
| postgres.table_bloat_size_perc | bloat | percentage |
| postgres.table_bloat_size | bloat | B |

Per index

These metrics refer to the table index.

Labels:

| Label | Description |
|-------|-------------|
| database | database name |
| schema | schema name |
| table | table name |
| parent_table | parent table name |
| index | index name |

Metrics:

| Metric | Dimensions | Unit |
|--------|------------|------|
| postgres.index_size | size | B |
| postgres.index_bloat_size_perc | bloat | percentage |
| postgres.index_bloat_size | bloat | B |
| postgres.index_usage_status | used, unused | status |

Alerts

The following alerts are available:

| Alert name | On metric | Description |
|------------|-----------|-------------|
| postgres_total_connection_utilization | postgres.connections_utilization | average total connection utilization over the last minute |
| postgres_acquired_locks_utilization | postgres.locks_utilization | average acquired locks utilization over the last minute |
| postgres_txid_exhaustion_perc | postgres.txid_exhaustion_perc | percent towards TXID wraparound |
| postgres_db_cache_io_ratio | postgres.db_cache_io_ratio | average cache hit ratio in db ${label:database} over the last minute |
| postgres_db_transactions_rollback_ratio | postgres.db_transactions_ratio | average aborted transactions percentage in db ${label:database} over the last five minutes |
| postgres_db_deadlocks_rate | postgres.db_deadlocks_rate | number of deadlocks detected in db ${label:database} in the last minute |
| postgres_table_cache_io_ratio | postgres.table_cache_io_ratio | average cache hit ratio in db ${label:database} table ${label:table} over the last minute |
| postgres_table_index_cache_io_ratio | postgres.table_index_cache_io_ratio | average index cache hit ratio in db ${label:database} table ${label:table} over the last minute |
| postgres_table_toast_cache_io_ratio | postgres.table_toast_cache_io_ratio | average TOAST hit ratio in db ${label:database} table ${label:table} over the last minute |
| postgres_table_toast_index_cache_io_ratio | postgres.table_toast_index_cache_io_ratio | average index TOAST hit ratio in db ${label:database} table ${label:table} over the last minute |
| postgres_table_bloat_size_perc | postgres.table_bloat_size_perc | bloat size percentage in db ${label:database} table ${label:table} |
| postgres_table_last_autovacuum_time | postgres.table_autovacuum_since_time | time elapsed since db ${label:database} table ${label:table} was vacuumed by the autovacuum daemon |
| postgres_table_last_autoanalyze_time | postgres.table_autoanalyze_since_time | time elapsed since db ${label:database} table ${label:table} was analyzed by the autovacuum daemon |
| postgres_index_bloat_size_perc | postgres.index_bloat_size_perc | bloat size percentage in db ${label:database} table ${label:table} index ${label:index} |
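These alert definitions ship with the Agent, and their thresholds can be overridden with the same edit-config script used for collector configuration. health.d/postgres.conf is the conventional file name for these alerts; verify the path on your install:

```shell
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config health.d/postgres.conf
```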

Troubleshooting

Debug Mode

Important: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.

To troubleshoot issues with the postgres collector, run the go.d.plugin with the debug option enabled. The output should give you clues as to why the collector isn’t working.

  • Navigate to the plugins.d directory, usually at /usr/libexec/netdata/plugins.d/. If that’s not the case on your system, open netdata.conf and look for the plugins setting under [directories].

    cd /usr/libexec/netdata/plugins.d/
    
  • Switch to the netdata user.

    sudo -u netdata -s
    
  • Run the go.d.plugin to debug the collector:

    ./go.d.plugin -d -m postgres
    

    To debug a specific job:

    ./go.d.plugin -d -m postgres -j jobName
    

Getting Logs

If you’re encountering problems with the postgres collector, follow these steps to retrieve logs and identify potential issues:

  • Run the command specific to your system (systemd, non-systemd, or Docker container).
  • Examine the output for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.

System with systemd

Use the following command to view logs generated since the last Netdata service restart:

journalctl _SYSTEMD_INVOCATION_ID="$(systemctl show --value --property=InvocationID netdata)" --namespace=netdata --grep postgres

System without systemd

Locate the collector log file, typically at /var/log/netdata/collector.log, and use grep to filter for the collector's name:

grep postgres /var/log/netdata/collector.log

Note: This method shows logs from all restarts. Focus on the latest entries for troubleshooting current issues.

Docker Container

If your Netdata runs in a Docker container named “netdata” (replace if different), use this command:

docker logs netdata 2>&1 | grep postgres

The observability platform companies need to succeed

Sign up for free

Want a personalised demo of Netdata for your use case?

Book a Demo