PostgreSQL full-text search can cover many apps. Use a simple decision rule, starter query, and indexing checklist to know when to add a search engine.

Most people don’t ask for “full-text search.” They ask for a search box that feels fast and finds what they meant on the first page. If results are slow, noisy, or oddly ordered, users don’t care whether you used PostgreSQL full-text search or a separate engine. They just stop trusting search.
This is one decision: keep search inside Postgres, or add a dedicated search engine. The target isn’t perfect relevance. It’s a solid baseline that’s quick to ship, easy to run, and good enough for how your app is actually used.
For many apps, PostgreSQL full-text search is enough for a long time. If you have a few text fields (title, description, notes), basic ranking, and a filter or two (status, category, tenant), Postgres can handle it without extra infrastructure. You get fewer moving parts, simpler backups, and fewer “why is search down but the app is up?” incidents.
“Enough” usually means you can hit three targets at the same time:
- Relevance: the right result shows up near the top for typical queries.
- Speed: results come back comfortably under a second.
- Upkeep: you aren’t constantly tuning analyzers or babysitting reindex jobs.
A concrete example: a SaaS dashboard where users search projects by name and notes. If a query like “onboarding checklist” returns the right project in the top 5, in under a second, and you’re not constantly tuning analyzers or reindexing jobs, that’s “enough.” When you can’t meet those targets without piling on complexity, that’s when “built-in search vs search engine” becomes a real question.
Teams often describe search in features, not outcomes. The useful move is translating each feature into what it costs to build, tune, and keep reliable.
Early requests usually sound like: typo tolerance, facets and filters, highlights, “smart” ranking, and autocomplete. For a first version, separate must-haves from nice-to-haves. A basic search box usually only needs to find relevant items, handle common word forms (plural, tense), respect simple filters, and stay fast as your table grows. That’s exactly where PostgreSQL full-text search tends to fit.
Postgres shines when your content lives in normal text fields and you want search close to your data: help articles, blog posts, support tickets, internal docs, product titles and descriptions, or notes on customer records. These are mostly “find me the right record” problems, not “build a search product” problems.
Nice-to-haves are where complexity creeps in. Typo tolerance and rich autocomplete usually push you toward extra tooling. Facets are possible in Postgres, but if you want many facets, deep analytics, and instant counts across huge datasets, a dedicated engine starts looking more attractive.
The hidden cost is rarely the license fee. It’s the second system. Once you add a search engine, you also add data syncing and backfills (and the bugs they create), monitoring and upgrades, “why is search showing old data?” support work, and two sets of relevance knobs.
If you’re unsure, start with Postgres, ship something simple, and only add another engine when a clear requirement can’t be met.
Use a three-check rule. If you pass all three, stay with PostgreSQL full-text search. If you fail one badly, consider a dedicated search engine.
Relevance needs: Are “good enough” results acceptable, or do you need near-perfect ranking across many edge cases (typos, synonyms, “people also searched,” personalized results)? If you can tolerate occasional imperfect ordering, Postgres usually works.
Query volume and latency: How many searches per second do you expect at peak, and what’s your real latency budget? If search is a small slice of traffic and you can keep queries fast with proper indexes, Postgres is fine. If search becomes a top workload and starts competing with core reads and writes, that’s a warning.
Complexity: Are you searching one or two text fields, or combining many signals (tags, filters, time decay, popularity, permissions) and multiple languages? The more complex the logic, the more you’ll feel friction inside SQL.
A safe starting point is simple: ship a baseline in Postgres, log slow queries and “no result” searches, and only then decide. Many apps never outgrow it, and you avoid running and syncing a second system too early.
Red flags that usually point toward a dedicated engine: strong typo tolerance and spell correction, instant autocomplete on every keystroke, facets and aggregations with fast counts across large datasets, and search traffic heavy enough to compete with core database work.
Green flags for staying in Postgres: a few text fields with simple filters (tenant, status, category), “good enough” ranking, and search that stays a modest slice of overall traffic.
PostgreSQL full-text search is a built-in way to turn text into something the database can search quickly, without scanning every row. It works best when your content lives in Postgres already and you want fast, decent search with predictable ops.
There are three pieces worth knowing:
- tsvector, the processed document: text broken into normalized word units (lexemes) that the database can match quickly.
- tsquery, the processed search input, matched against a tsvector with the @@ operator.
- ts_rank (or ts_rank_cd) to put more relevant rows first.

Language configuration matters because it changes how Postgres treats words. With the right config, “running” and “run” can match (stemming), and common filler words can be ignored (stop words). With the wrong config, search can feel broken because normal user wording no longer matches what was indexed.
Prefix matching is the feature people reach for when they want “typeahead-ish” behavior, like matching “dev” to “developer.” In Postgres full-text search, that’s typically done with a prefix operator (for example, term:*). It can improve perceived quality, but it often increases work per query, so treat it as an optional upgrade, not a default.
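A tiny sketch of the prefix operator (this assumes the articles table and the stored search_doc tsvector column set up later in this article):

-- Hypothetical prefix query: 'dev:*' matches 'developer', 'devops', etc.
SELECT id, title
FROM articles
WHERE search_doc @@ to_tsquery('english', 'dev:*');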
What Postgres isn’t trying to be: a complete search platform with every feature. If you need fuzzy spelling correction, advanced autocomplete, learning-to-rank, complex analyzers per field, or distributed indexing across many nodes, you’re outside the built-in comfort zone. For many apps, though, PostgreSQL full-text search gives you most of what users expect with far fewer moving parts.
Here’s a small, realistic shape for content you want to search:
-- Minimal example table
CREATE TABLE articles (
  id bigserial PRIMARY KEY,
  title text NOT NULL,
  body text NOT NULL,
  updated_at timestamptz NOT NULL DEFAULT now()
);
A good baseline for PostgreSQL full-text search is: build a query from what the user typed, filter rows first (when you can), then rank the remaining matches.
-- $1 = user search text, $2 = limit, $3 = offset
WITH q AS (
  SELECT websearch_to_tsquery('english', $1) AS query
)
SELECT
  a.id,
  a.title,
  a.updated_at,
  ts_rank_cd(
    setweight(to_tsvector('english', coalesce(a.title, '')), 'A') ||
    setweight(to_tsvector('english', coalesce(a.body, '')), 'B'),
    q.query
  ) AS rank
FROM articles a
CROSS JOIN q
WHERE
  a.updated_at >= now() - interval '2 years' -- example safe filter
  AND (
    setweight(to_tsvector('english', coalesce(a.title, '')), 'A') ||
    setweight(to_tsvector('english', coalesce(a.body, '')), 'B')
  ) @@ q.query
ORDER BY rank DESC, a.updated_at DESC, a.id DESC
LIMIT $2 OFFSET $3;
A few details that save time later:
- Filter with WHERE before ranking (status, tenant_id, date ranges). You rank fewer rows, so it stays fast.
- Add tie-breakers to ORDER BY (like updated_at, then id). This keeps pagination stable when many results have the same rank.
- Use websearch_to_tsquery for user input. It handles quotes and simple operators in a way people expect.

Once this baseline works, move the to_tsvector(...) expression into a stored column. That avoids recalculating it on every query and makes indexing straightforward.
Most “PostgreSQL full-text search is slow” stories come down to one thing: the database is building the search document on every query. Fix that first by storing a prebuilt tsvector and indexing it.
Maintaining the tsvector: generated column or trigger?

A generated column is the simplest option when your search document is built from columns in the same row. It stays correct automatically and is hard to forget during updates.
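For the articles table above, a minimal sketch might look like this (the search_doc column name is illustrative; generated columns need PostgreSQL 12+):

-- Stored search document as a generated column (PostgreSQL 12+).
ALTER TABLE articles
ADD COLUMN search_doc tsvector
GENERATED ALWAYS AS (
  setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
  setweight(to_tsvector('english', coalesce(body, '')), 'B')
) STORED;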
Use a trigger-maintained tsvector when the document depends on related tables (for example, combining a product row with its category name), or when you want custom logic that’s not easy to express as a single generated expression. Triggers add moving parts, so keep them small and test them.
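A minimal trigger sketch, assuming search_doc is a plain tsvector column instead of a generated one (function and trigger names are illustrative):

-- Keep search_doc fresh on every relevant write.
CREATE FUNCTION articles_search_doc_refresh() RETURNS trigger AS $$
BEGIN
  NEW.search_doc :=
    setweight(to_tsvector('english', coalesce(NEW.title, '')), 'A') ||
    setweight(to_tsvector('english', coalesce(NEW.body, '')), 'B');
  RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER articles_search_doc_trg
BEFORE INSERT OR UPDATE OF title, body ON articles
FOR EACH ROW EXECUTE FUNCTION articles_search_doc_refresh();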
Create a GIN index on the tsvector column. That’s the baseline that makes PostgreSQL full-text search feel instant for typical app search.
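The index itself is one statement (the name is illustrative):

CREATE INDEX articles_search_doc_idx
ON articles USING GIN (search_doc);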
A setup that works for many apps:
- Keep the tsvector in the same table as the rows you search most often.
- Add a GIN index on that tsvector.
- Match with @@ against the stored tsvector, not to_tsvector(...) computed on the fly.
- Run VACUUM (ANALYZE) after large backfills so the planner understands the new index.

Keeping the vector in the same table is usually faster and simpler. A separate search table can make sense if the base table is very write-heavy, or if you’re indexing a combined document that spans many tables and you want to update it on your own schedule.
Partial indexes can help when you only search a subset of rows, like status = 'active', a single tenant in a multi-tenant app, or a specific language. They reduce index size and can speed up searches, but only if your queries always include the same filter.
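A sketch, assuming a status column that the example articles table doesn’t have yet:

-- Only index rows that are actually searchable.
CREATE INDEX articles_active_search_idx
ON articles USING GIN (search_doc)
WHERE status = 'active';
-- Queries must repeat the same WHERE status = 'active' filter to use it.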
You can get surprisingly good results with PostgreSQL full-text search if you keep relevance rules simple and predictable.
The easiest win is field weighting: matches in a title should count more than matches deep in the body. Build a combined tsvector where the title is weighted higher than the description, then rank with ts_rank or ts_rank_cd.
If you need “fresh” or “popular” items to float up, do it carefully. A small boost is fine, but don’t let it override text relevance. A practical pattern is: rank by text first, then break ties with recency, or add a capped bonus so an irrelevant new item doesn’t beat an older perfect match.
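One hedged sketch of the capped-bonus idea (the 0.05 bonus and 30-day window are arbitrary placeholders to tune; search_doc is the stored column from above):

-- Text rank first, plus a small capped freshness bonus.
SELECT a.id, a.title,
  ts_rank_cd(a.search_doc, q.query)
    + CASE WHEN a.updated_at >= now() - interval '30 days'
           THEN 0.05 ELSE 0 END AS score
FROM articles a,
     websearch_to_tsquery('english', 'onboarding checklist') AS q(query)
WHERE a.search_doc @@ q.query
ORDER BY score DESC, a.updated_at DESC, a.id DESC
LIMIT 20;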
Synonyms and phrase matching are where expectations often diverge. Synonyms aren’t automatic; you only get them if you add a thesaurus or custom dictionary, or you expand the query terms yourself (for example, treating “auth” as “authentication”). Phrase matching also isn’t the default: plain queries match words anywhere, not “this exact phrase.” If users type quoted phrases or long questions, consider phraseto_tsquery or websearch_to_tsquery to better match how people search.
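You can see how websearch_to_tsquery interprets input by running it directly; the output below is from a recent Postgres version and may vary slightly across releases:

-- Quoted phrases become phrase matches; plain words are ANDed.
SELECT websearch_to_tsquery('english', '"reset password" billing');
-- => 'reset' <-> 'password' & 'bill'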
Mixed-language content needs a decision. If you know the language per document, store it and generate the tsvector with the right configuration (English, Russian, etc.). If you don’t, a safe fallback is indexing with the simple configuration (no stemming), or keeping two vectors: one language-specific when known, one simple for everything.
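A hedged sketch of the “store the language per document” option (the lang column is an assumption; 'simple' is the no-stemming fallback):

-- Per-row language config with a no-stemming default.
ALTER TABLE articles
ADD COLUMN lang regconfig NOT NULL DEFAULT 'simple';

ALTER TABLE articles
ADD COLUMN search_doc_lang tsvector
GENERATED ALWAYS AS (
  to_tsvector(lang, coalesce(title, '') || ' ' || coalesce(body, ''))
) STORED;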
To validate relevance, keep it small and concrete:
- Collect the top 20 real queries (from logs, or best guesses before launch).
- Write down the expected top results for each.
- Re-run the list after ranking or indexing changes to catch regressions.
This is usually enough for PostgreSQL full-text search in app search boxes like “templates,” “docs,” or “projects.”
Most “PostgreSQL full-text search is slow or irrelevant” stories come from a few avoidable mistakes. Fixing them is usually simpler than adding a new search system.
One common trap is treating tsvector like a computed value that stays correct on its own. If you store tsvector in a column but don’t update it on every insert and update, results will look random because the index no longer matches the text. If you compute to_tsvector(...) on the fly inside the query, results can be correct but slower, and you may miss the benefit of a dedicated index.
Another easy way to hurt performance is ranking before you narrow the candidate set. ts_rank is useful, but it should usually run after Postgres has used the index to find matching rows. If you calculate rank for a huge portion of the table (or join to other tables first), you can turn a fast search into a table scan.
People also expect “contains” search to behave like LIKE '%term%'. Leading wildcards don’t map well to full-text search because FTS is based on words (lexemes), not arbitrary substrings. If you need substring search for product codes or partial IDs, use a different tool for that case (for example, trigram indexing) rather than blaming FTS.
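A minimal pg_trgm sketch for that substring case (the index name and the choice of the title column are just examples):

-- Substring search for codes and partial IDs via trigrams.
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX articles_title_trgm_idx
ON articles USING GIN (title gin_trgm_ops);

-- A leading-wildcard match can now use the index:
SELECT id, title
FROM articles
WHERE title ILIKE '%AB-1234%';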
Performance issues often come from result handling, not matching. The classic pattern to watch is deep OFFSET pagination, which makes Postgres skip more and more rows as you page; keyset (cursor) pagination over the same ORDER BY columns avoids that.

Operational issues matter too. Index bloat can build up after lots of updates, and reindexing can be expensive if you wait until things are already painful. Measure real query times (and check EXPLAIN ANALYZE) before and after changes. Without numbers, it’s easy to “fix” PostgreSQL full-text search by making it worse in a different way.
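A hedged keyset sketch building on the starter query (the caller passes back the last row’s rank, updated_at, and id as $2–$4; search_doc is the stored column from earlier):

-- Fetch rows strictly "after" the last one shown, under the same
-- ORDER BY (rank DESC, updated_at DESC, id DESC).
SELECT a.id, a.title,
       ts_rank_cd(a.search_doc, q.query) AS rank
FROM articles a,
     websearch_to_tsquery('english', $1) AS q(query)
WHERE a.search_doc @@ q.query
  AND (ts_rank_cd(a.search_doc, q.query), a.updated_at, a.id)
      < ($2::real, $3::timestamptz, $4::bigint)
ORDER BY rank DESC, a.updated_at DESC, a.id DESC
LIMIT 20;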
Before you blame PostgreSQL full-text search, run these checks. Most “Postgres search is slow or irrelevant” bugs come from missing basics, not from the feature itself.
Build a real tsvector: store it in a generated or maintained column (not computed on every query), use the right language config (english, simple, etc.), and apply weights if you mix fields (title > subtitle > body).
Normalize what you index: keep noisy fields (IDs, boilerplate, navigation text) out of the tsvector, and trim huge blobs if users never search them.
Create the right index: add a GIN index on the tsvector column and confirm it’s used in EXPLAIN (a quick check is sketched after this list). If only a subset is searchable (for example status = 'published'), a partial index can cut size and speed up reads.
Keep tables healthy: dead tuples can slow index scans. Regular vacuuming matters, especially on frequently updated content.
Have a reindex plan: big migrations or bloated indexes sometimes need a controlled reindex window.
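For the “create the right index” check, a quick verification sketch (names assume the setup from earlier sections):

-- Confirm the GIN index is actually used.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id
FROM articles
WHERE search_doc @@ websearch_to_tsquery('english', 'reset password');
-- Healthy plans show a Bitmap Index Scan on articles_search_doc_idx.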
Once the data and index look right, focus on query shape. PostgreSQL full-text search is fast when it can narrow the candidate set early.
Filter first, then rank: apply strict filters (tenant, language, published, category) before ranking. Ranking thousands of rows you’ll later discard is wasted work.
Use stable ordering: order by rank and then a tie-breaker like updated_at or id so results don’t jump between refreshes.
Avoid “query does everything”: if you need fuzzy matching or typo tolerance, do it intentionally (and measure). Don’t accidentally force sequential scans.
Test real queries: collect the top 20 searches, check relevance by hand, and keep a small expected-results list to catch regressions.
Watch slow paths: log slow queries, review EXPLAIN (ANALYZE, BUFFERS), and monitor index size and cache hit rate so you can spot when growth changes behavior.
A SaaS help center is a good place to start because the goal is simple: help people find the one article that answers their question. You have a few thousand articles, each with a title, short summary, and body text. Most visitors type 2 to 5 words like “reset password” or “billing invoice.”
With PostgreSQL full-text search, this can feel done surprisingly fast. You store a tsvector for the combined fields, add a GIN index, and rank by relevance. Success looks like: results show up in under 100 ms, the top 3 results are usually correct, and you don’t need to babysit the system.
Then the product grows. Support wants to filter by product area, platform (web, iOS, Android), and plan (free, pro, business). Docs writers want synonyms, “did you mean,” and better handling of typos. Marketing wants analytics like “top searches with zero results.” Traffic climbs and search becomes one of the busiest endpoints.
Those are the signals that a dedicated search engine might be worth the cost:
- faceted filtering across several dimensions with fast counts,
- typo tolerance and “did you mean” suggestions,
- search analytics as a product feature, and
- search load heavy enough to compete with core database work.
A practical migration path is to keep Postgres as the source of truth, even after you add a search engine. Start by logging search queries and no-result cases, then run an async sync job that copies only the searchable fields into the new index. Run both in parallel for a while and switch gradually, instead of betting everything on day one.
If your search is mostly “find documents that contain these words” and your dataset isn’t massive, PostgreSQL full-text search is usually enough. Start there, get it working, and only add a dedicated engine when you can name the missing feature or the scaling pain.
A recap worth keeping handy:
- Stay in Postgres while you can store a tsvector, add a GIN index, and your ranking needs are basic.
- Add a dedicated engine only when you can name the missing feature or the scaling pain.

A practical next step: implement the starter query and index from earlier sections, then log a few simple metrics for a week. Track p95 query time, slow queries, and a rough success signal like “search -> click -> no immediate bounce” (even a basic event counter helps). You’ll quickly see whether you need better ranking or just better UX (filters, highlighting, better snippets).
Start planning a dedicated search engine when one of these becomes a real requirement (not a nice-to-have): strong autocomplete or instant search on every keystroke at scale, strong typo tolerance and spell correction, facets and aggregations across many fields with fast counts, advanced relevance tooling (synonym sets, learning-to-rank, per-query boosts), or sustained high load and large indexes that are hard to keep fast.
If you want to move fast on the app side, Koder.ai (koder.ai) can be a handy way to prototype the search UI and API via chat, then iterate safely using snapshots and rollback while you measure what users actually do.
PostgreSQL full-text search is “enough” when you can hit three things at once: relevant results near the top for typical queries, responses comfortably under a second, and low upkeep without constant tuning.
If you can meet these with a stored tsvector + a GIN index, you’re usually in a great place.
Default to PostgreSQL full-text search first. It ships faster, keeps your data and search in one place, and avoids building and maintaining a separate indexing pipeline.
Move to a dedicated engine when you have a clear requirement Postgres can’t meet well (high-quality typo tolerance, rich autocomplete, heavy faceting, or search load that competes with core database work).
A simple rule is: stay in Postgres if you pass these three checks:
- Relevance: “good enough” ranking is acceptable, without typo tolerance or personalization.
- Load: expected search traffic fits your latency budget with proper indexes.
- Complexity: you’re combining a few fields and simple filters, not many signals and languages.
If you fail one badly (especially relevance features like typos/autocomplete, or high search traffic), consider a dedicated engine.
Use Postgres FTS when your search is mostly “find the right record” across a few fields like title/body/notes, with simple filters (tenant, status, category).
It’s a strong fit for help centers, internal docs, tickets, blog/article search, and SaaS dashboards where users search by project names and notes.
A good baseline query usually:
- builds a tsquery from user input with websearch_to_tsquery,
- filters rows first (tenant, status, date ranges), and
- ranks the remaining matches with stable tie-breakers in ORDER BY.

Store a prebuilt tsvector and add a GIN index. That avoids recomputing to_tsvector(...) on every request.
Practical setup:
- Use a generated column when the search document is built from columns in the same row (simple and hard to break).
- Use a trigger-maintained column when your search text depends on related tables or custom logic.
- Default choice: generated column first, triggers only when you truly need cross-table composition.
Start with predictable relevance: weight titles above body text, rank by text relevance first, and use recency or popularity only as a tie-breaker or a small, capped bonus.
Then validate using a small list of real user queries and expected top results.
Postgres FTS is word-based, not substring-based. So it won’t behave like LIKE '%term%' for arbitrary partial strings.
If you need substring search (IDs, codes, fragments), handle that separately (often with trigram indexing) instead of forcing full-text search to do a job it’s not designed for.
Common signals you’ve outgrown Postgres FTS: strong typo tolerance and spell correction, instant autocomplete, heavy faceting with fast counts, advanced relevance tooling, or sustained search load that competes with core database work.
A practical path is to keep Postgres as the source of truth and add async indexing when the requirement is clear.
Match with @@ against a stored tsvector, and rank with ts_rank/ts_rank_cd plus a stable tie-breaker like updated_at, id. This keeps results relevant, fast, and stable for pagination.
Store a prebuilt tsvector, add a GIN index on it, and match with tsvector_column @@ tsquery. This is the most common fix when search feels slow.