How to Set Up an MCP Server for PostgreSQL

  • January 6, 2026
  • Education

Model Context Protocol (MCP) servers create a secure bridge between AI assistants and PostgreSQL databases, enabling natural language queries instead of manual SQL writing. However, the most widely-used implementation has a critical SQL injection vulnerability that could allow attackers to bypass read-only protections entirely. This guide covers secure setup procedures, implementation options, and production-ready configurations. For enterprise teams requiring PostgreSQL database connectivity with built-in security controls and auto-generated documentation, platforms like DreamFactory provide an alternative approach that eliminates many MCP security concerns while delivering instant REST API access.


Key Takeaways

  • The deprecated Node reference server at @modelcontextprotocol/server-postgres v0.6.2 has over 20,000 weekly downloads despite containing a SQL injection vulnerability
  • Postgres MCP Pro offers read-only transaction wrapping with index tuning, health checks, and explain plan features
  • Basic setup takes 30-60 minutes; production-ready deployment requires 2-3 hours including security configuration
  • Three deployment methods are available: Docker (recommended), Python via pip, and Node.js for basic testing only
  • SSL/TLS encryption and read-only database users are mandatory for any production deployment
  • MCP implementation significantly reduces time spent on ad-hoc data analysis tasks through natural language query interfaces

Understanding Model Context Protocol for PostgreSQL

MCP servers establish a standardized communication layer that allows AI assistants like Claude, Cursor, and VS Code Copilot to execute SQL queries, inspect database schemas, and provide optimization recommendations through conversational interfaces. Rather than writing SQL manually, developers and analysts ask questions in plain language while the MCP server translates requests into proper database operations.

The protocol supports several core capabilities:

  • Schema discovery and inspection for understanding database structure
  • Read-only query execution with transaction wrapping for safety
  • Performance analysis tools including index recommendations
  • Database health monitoring with automated checks
  • Multiple access modes for different security requirements

This approach transforms database interaction from a technical skill requiring SQL expertise into a conversational workflow accessible to broader teams. However, the security implications of granting AI systems database access demand careful implementation.


Choosing the Right PostgreSQL MCP Server Implementation

Three primary implementations exist, each with distinct security profiles and feature sets:

Postgres MCP Pro (Recommended)

The Postgres MCP Pro implementation provides the most comprehensive feature set for production use. Built by Crystal DBA, it includes index tuning algorithms that test thousands of combinations, health checks covering buffer cache and vacuum status, and explain plans with hypothetical index support.

Key advantages include:

  • Restricted and unrestricted access modes with SQL parsing
  • Connection pooling with session cleanup
  • Query execution time limits
  • Docker and Python deployment options

Node Reference Server (Deprecated)

The original @modelcontextprotocol/server-postgres package remains widely used despite being archived with known vulnerabilities. Security researchers demonstrated that attackers can bypass read-only mode with commands like COMMIT; DROP TABLE users; due to improper input sanitization.

This implementation should be avoided entirely for any database containing sensitive data. The Zed editor fork addresses these vulnerabilities if Node.js deployment is required.

AWS Labs Aurora MCP

For organizations running Aurora PostgreSQL, the AWS Labs implementation provides Secrets Manager–backed credentials (with IAM permissions) and RDS Data API support. This option integrates naturally with existing AWS security infrastructure but limits deployment flexibility.


Security Requirements Before Installation

Before connecting any AI client to your database, establish proper security controls. Database breaches through misconfigured MCP servers represent a significant risk that standard tutorials often minimize.

Creating Read-Only Database Users

Establish dedicated credentials with minimal permissions: create a readonly_user role with login capability and a strong password, grant it CONNECT on the target database and USAGE on the public schema, grant SELECT on all existing tables in the public schema, and alter default privileges so future tables automatically grant SELECT to the role as well.
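Translated into SQL, those steps look like the following; readonly_user, your_db, and the password are placeholders to adapt to your environment:

```sql
-- Dedicated login role with a strong password (placeholder shown)
CREATE ROLE readonly_user WITH LOGIN PASSWORD 'use-a-strong-password-here';

-- Allow connections to the target database
GRANT CONNECT ON DATABASE your_db TO readonly_user;

-- Allow the role to see objects in the public schema
GRANT USAGE ON SCHEMA public TO readonly_user;

-- Read access to all existing tables
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;

-- Read access to tables created in the future
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO readonly_user;
```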

Never use administrative accounts for MCP connections. The principle of least privilege applies especially when AI systems execute queries based on natural language interpretation.

Enforcing SSL/TLS Encryption

Configure PostgreSQL for encrypted connections by modifying postgresql.conf. Set password_encryption to scram-sha-256 and enable ssl by setting it to on.

Update pg_hba.conf to require SSL for MCP connections. Add a hostssl entry for your database, specifying readonly_user with your trusted CIDR range (e.g., your VPC CIDR or specific IP ranges) and authentication method scram-sha-256. Avoid using 0.0.0.0/0 in production environments—always scope access to trusted networks and rely on firewall rules or security groups as an additional layer of defense.
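As a concrete sketch, the postgresql.conf settings and a matching pg_hba.conf entry might look like this; your_db and the 10.0.0.0/16 CIDR are placeholders for your database name and trusted network range:

```
# postgresql.conf
password_encryption = scram-sha-256
ssl = on

# pg_hba.conf -- require SSL for the MCP read-only user
# TYPE     DATABASE   USER            ADDRESS        METHOD
hostssl    your_db    readonly_user   10.0.0.0/16    scram-sha-256
```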

These configurations ensure credentials and query results remain protected in transit. Organizations handling sensitive data should review DreamFactory's security guide for additional enterprise security considerations.


Step-by-Step Installation Guide

Docker Installation (Recommended)

Docker deployment provides consistent environments and simplified credential management. Pull the crystaldba/postgres-mcp image using docker pull. Then run the container in interactive mode using docker run with the -i and --rm flags (stdio-based MCP servers require foreground execution to communicate over stdin/stdout). Set the DATABASE_URI environment variable using -e, providing your connection string in the format postgresql://readonly_user:password@host.docker.internal:5432/your_db?sslmode=verify-ca (note: if using sslmode=verify-full, you must also provide a trusted root certificate via the sslrootcert parameter or ensure your OS trust store contains the appropriate CA certificate). Finally, specify the image name crystaldba/postgres-mcp followed by the --access-mode=restricted flag.
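Putting those pieces together, the full commands look like this (the host, database name, and credentials in the connection string are placeholders):

```shell
docker pull crystaldba/postgres-mcp

docker run -i --rm \
  -e DATABASE_URI="postgresql://readonly_user:password@host.docker.internal:5432/your_db?sslmode=verify-ca" \
  crystaldba/postgres-mcp \
  --access-mode=restricted
```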

If you need HTTP/SSE transport instead of stdio, map port 8000 with -p 8000:8000.

The --access-mode=restricted flag enables SQL parsing that blocks dangerous operations. Use host.docker.internal for connections to databases running on the host machine.

For Linux systems where host.docker.internal isn't available by default, add the --add-host flag with the value host.docker.internal:host-gateway to your docker run command.
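On Linux, the same run command with the host-gateway mapping added looks like this (connection-string values remain placeholders):

```shell
docker run -i --rm \
  --add-host=host.docker.internal:host-gateway \
  -e DATABASE_URI="postgresql://readonly_user:password@host.docker.internal:5432/your_db?sslmode=verify-ca" \
  crystaldba/postgres-mcp \
  --access-mode=restricted
```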

Python Installation

Install via pip for direct Python integration using the command pip install postgres-mcp.

Configure using environment variables or command-line arguments. This method suits developers who prefer managing dependencies through virtual environments.
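A minimal sketch of the Python route, assuming a virtual environment and the DATABASE_URI environment variable used elsewhere in this guide (the connection-string values are placeholders):

```shell
# Install into an isolated virtual environment
python -m venv mcp-env
source mcp-env/bin/activate
pip install postgres-mcp

# Point the server at your database and start it in restricted mode
export DATABASE_URI="postgresql://readonly_user:password@localhost:5432/your_db?sslmode=verify-ca"
postgres-mcp --access-mode=restricted
```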

Verifying Installation

Test connectivity before configuring AI clients. Use psql with your connection string in the format postgresql://readonly_user:password@host:5432/dbname?sslmode=verify-ca.
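For example, substituting your own host, database, and credentials:

```shell
psql "postgresql://readonly_user:password@host:5432/dbname?sslmode=verify-ca" \
  -c "SELECT version();"
```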

Successful connection confirms network access, credentials, and SSL configuration function correctly. Address any connection failures at this stage rather than debugging through AI client logs.


Configuring Your AI Client for PostgreSQL MCP

Each AI client requires specific configuration to connect with MCP servers. The configuration process varies by platform but follows similar patterns.

Claude Desktop Configuration

Locate the configuration file based on your operating system. For macOS, navigate to ~/Library/Application Support/Claude/claude_desktop_config.json. For Windows, navigate to %APPDATA%\Claude\claude_desktop_config.json.

Add the server configuration by creating an mcpServers object. Inside it, create a postgres entry with the following properties: set command to docker, provide args as an array containing run, -i, --rm, -e, DATABASE_URI, crystaldba/postgres-mcp, and --access-mode=restricted. Create an env object with DATABASE_URI set to your full connection string postgresql://readonly_user:password@host.docker.internal:5432/your_db?sslmode=verify-ca.
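Assembled, the configuration described above looks like this (the connection-string values are placeholders for your own environment):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "DATABASE_URI",
        "crystaldba/postgres-mcp",
        "--access-mode=restricted"
      ],
      "env": {
        "DATABASE_URI": "postgresql://readonly_user:password@host.docker.internal:5432/your_db?sslmode=verify-ca"
      }
    }
  }
}
```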

Restart Claude Desktop after saving configuration changes.

VS Code Integration

The Cursor cookbook documents MCP integration for VS Code-based editors. Open Command Palette using Cmd/Ctrl+Shift+P. Run the MCP: Add Server command. Select stdio or HTTP transport. Finally, provide connection details when prompted.

For Cursor IDE specifically, create a file at .cursor/mcp.json in your project root. Inside, create an mcpServers object with a postgres entry. For Postgres MCP Pro (recommended), set command to postgres-mcp and args to an array containing your connection string. Alternatively, if using the deprecated Node reference server (not recommended due to SQL injection vulnerabilities), set command to npx and args to an array containing -y, @modelcontextprotocol/server-postgres, followed by your PostgreSQL connection string.
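A .cursor/mcp.json sketch for the recommended Postgres MCP Pro route (host, database, and credentials are placeholders):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "postgres-mcp",
      "args": [
        "postgresql://readonly_user:password@localhost:5432/your_db?sslmode=verify-ca"
      ]
    }
  }
}
```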


Testing Your MCP-PostgreSQL Connection

After configuration, verify functionality through your AI client.

Basic connectivity tests:

  • Ask to list all schemas in the database
  • Request to show tables in the public schema
  • Execute SELECT count(*) FROM users

Schema exploration:

  • Request a description of the structure of the orders table
  • Ask what foreign key relationships exist in this database

Query execution:

  • Ask to show the top 10 customers by order count
  • Request total revenue for last month

Successful responses confirm the entire pipeline works correctly. Failed queries typically indicate connection issues, permission problems, or configuration syntax errors.


Production Deployment Best Practices

Moving from development to production requires additional considerations for reliability and security.

Enabling Performance Extensions

Install PostgreSQL extensions that enhance MCP capabilities. Create the pg_stat_statements extension if it doesn't exist using CREATE EXTENSION IF NOT EXISTS pg_stat_statements. Similarly, create the hypopg extension using CREATE EXTENSION IF NOT EXISTS hypopg.
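In SQL, run against the target database (creating extensions typically requires superuser privileges):

```sql
-- Query workload statistics; also requires pg_stat_statements in
-- shared_preload_libraries in postgresql.conf, followed by a restart
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Hypothetical index testing without building real indexes
CREATE EXTENSION IF NOT EXISTS hypopg;
```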

Postgres MCP Pro uses pg_stat_statements for workload analysis to identify slow queries. HypoPG supports hypothetical index testing without creating actual indexes.

Implementing Query Timeouts

Configure execution limits to prevent runaway queries from affecting database performance. Restricted mode in Postgres MCP Pro includes built-in timeout controls.
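One way to add a database-side backstop is a statement timeout on the read-only role; the 30-second value here is an illustrative choice to adjust for your workload:

```sql
-- Cap query execution time for the MCP role at the database level
ALTER ROLE readonly_user SET statement_timeout = '30s';
```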

Monitoring and Logging

Track MCP server activity through PostgreSQL query logs for audit trails, Docker container logs for connection issues, and AI client logs for request/response debugging.

Organizations requiring comprehensive API monitoring should evaluate platforms that provide built-in analytics and rate limiting.


Common Challenges and Troubleshooting

Connection Refused Errors

Verify database accessibility from the MCP server host. Check firewall rules, security groups (for cloud databases), and PostgreSQL's listen_addresses configuration. Test with psql before investigating MCP-specific issues.

SSL Certificate Verification Failures

Self-signed certificates fail with sslmode=verify-full. Options include providing CA certificate via sslrootcert parameter, using sslmode=require for testing environments only, or deploying proper certificates from a trusted CA for production. Note that verify-full requires both a valid certificate chain and hostname verification—use verify-ca if hostname verification isn't configured yet.
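For example, pointing psql at a CA bundle (the hostname, database, and certificate path are placeholders):

```shell
psql "postgresql://readonly_user:password@db.example.com:5432/your_db?sslmode=verify-full&sslrootcert=/etc/ssl/certs/my-ca.pem"
```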

Permission Denied Errors

Run GRANT USAGE ON SCHEMA public TO readonly_user if schema access fails. Verify the user can execute basic SELECT statements through psql before testing MCP connectivity.

Docker Networking Issues

Linux users encountering host connectivity problems should add --add-host=host.docker.internal:host-gateway to docker run commands, or use the host's actual IP address in connection strings.


Why DreamFactory Enhances PostgreSQL Data Access

While MCP servers enable AI-powered database queries, they address only one aspect of the broader data accessibility challenge. Organizations typically need multiple interfaces—REST APIs, real-time integrations, and secure third-party access—that MCP alone cannot provide.

DreamFactory generates production-ready REST APIs for PostgreSQL in minutes through configuration rather than code. The platform introspects database schemas automatically, creating CRUD endpoints, complex filtering, pagination, and joins without developer intervention.

Key capabilities that complement or replace MCP workflows:

  • Automatic Swagger documentation generated for every API endpoint
  • Role-based access control at service, endpoint, table, and field levels
  • Built-in authentication supporting OAuth 2.0, SAML, LDAP, and Active Directory
  • Server-side scripting in PHP, Python, or Node.js for custom business logic
  • SQL injection protection via query decomposition and prepared statements

For teams evaluating AI-driven database tools, DreamFactory provides enterprise security controls that MCP implementations typically lack. The platform supports 20+ database types through native connectors, enabling unified API access across SQL, NoSQL, and file storage systems.

Unlike MCP servers requiring individual configuration per AI client, DreamFactory APIs work with any HTTP client, including AI assistants, mobile applications, and third-party integrations. This flexibility eliminates vendor lock-in while providing consistent security enforcement across all access patterns.

Request a demo to see how DreamFactory can simplify PostgreSQL API generation while maintaining the security controls enterprise deployments require.

Frequently Asked Questions

What is the main security risk with PostgreSQL MCP servers?

The most significant risk involves the deprecated Node reference server at @modelcontextprotocol/server-postgres v0.6.2, which contains a SQL injection vulnerability allowing attackers to bypass read-only protections. Despite being archived, this package maintains over 20,000 weekly downloads. Organizations should use Postgres MCP Pro or the Zed fork instead, and never connect production databases to vulnerable implementations.

How long does MCP server setup take for PostgreSQL?

Basic setup requires 30-60 minutes including installation and client configuration. Production-ready deployment takes 2-3 hours when including proper security configuration, SSL certificate setup, read-only user creation, and testing. Teams should allocate additional time for user training and documentation.

Can I use MCP servers with databases other than PostgreSQL?

MCP servers exist for multiple database types, though PostgreSQL implementations are most mature. Each database requires its own MCP server configuration. For organizations needing unified access across multiple database types, platforms like DreamFactory provide single-platform connectivity to SQL, NoSQL, and file storage systems through auto-generated REST APIs.

What happens to query results when using MCP with AI assistants?

Query results pass through the AI client's conversation context, meaning data may be logged by AI providers depending on their data processing policies. For sensitive data, review your AI provider's retention policies and consider using enterprise agreements that limit data storage. Local AI models eliminate third-party data exposure entirely but require significant infrastructure investment.

Should I use restricted or unrestricted access mode?

Use restricted mode for any database containing production data or sensitive information, as this mode enables SQL parsing that blocks dangerous operations while still allowing read queries. Unrestricted mode suits development environments where schema modifications or write operations are acceptable. Never deploy unrestricted mode against production databases regardless of other security controls.