<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Pranit Codes]]></title><description><![CDATA[Pranit Codes]]></description><link>https://blogs.pranitpatil.com</link><generator>RSS for Node</generator><lastBuildDate>Tue, 21 Apr 2026 21:25:52 GMT</lastBuildDate><atom:link href="https://blogs.pranitpatil.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[AI Buzzwords Explained Simply]]></title><description><![CDATA[LLM (Large Language Model)
ChatGPT, Claude, and Gemini are LLMs. An LLM is an AI model that can understand and generate human-like responses. It is trained on massive amounts of data: billions of sentences, articles, documents, and more.
Tokens
To...]]></description><link>https://blogs.pranitpatil.com/ai-buzzwords-explained-simply</link><guid isPermaLink="true">https://blogs.pranitpatil.com/ai-buzzwords-explained-simply</guid><category><![CDATA[AI]]></category><category><![CDATA[mcp]]></category><category><![CDATA[agentic AI]]></category><dc:creator><![CDATA[Pranit Patil]]></dc:creator><pubDate>Sat, 15 Nov 2025 14:23:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763216512985/25d53bd2-31a2-4854-be5a-bd6b1913641b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-llm-large-language-model">LLM (Large Language Model)</h2>
<p>ChatGPT, Claude, and Gemini are LLMs. An LLM is an AI model that can understand and generate human-like responses. It is trained on massive amounts of data: billions of sentences, articles, documents, and more.</p>
<h2 id="heading-tokens">Tokens</h2>
<p>Tokens are <strong>small pieces of text</strong> that AI uses to read and understand language. AI <strong>does not read full sentences</strong> like humans do. Instead, it breaks everything into tiny units called <strong>tokens</strong>. A token can be a whole word, part of a word, punctuation, a space, or even an emoji.</p>
<h3 id="heading-examples"><strong>Examples:</strong></h3>
<p><strong>“AI is changing the world.”</strong> Possible tokens: "AI", " is", " changing", " the", " world", "."</p>
<p><strong>“Unbelievable performance today!”</strong> "Un", "believable", " performance", " today", "!"<br />The word count here is 3, but the token count is 5. Tokens aren’t always full words; long or uncommon words often get <strong>split</strong> into multiple tokens. So when estimating tokens for a paragraph, it’s safer to add about 10–20% to the word count.</p>
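<p>A quick sketch of that rule of thumb as a helper function. This is a rough heuristic only, not a real tokenizer (actual tokenizers are model-specific):</p>

```typescript
// Rough token estimate: word count plus ~15% padding, since long or
// uncommon words often split into multiple tokens. Heuristic only.
function estimateTokens(text: string): number {
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  return Math.ceil(words * 1.15);
}

estimateTokens("Unbelievable performance today!"); // 3 words → estimate 4
```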
<h2 id="heading-context">Context</h2>
<p>Context is how much previous text and conversation the LLM can remember and use at one time. You might have seen <strong>“The conversation is too long. Let's start a new one.”</strong> or <strong>“Context length exceeded.”</strong> messages in ChatGPT; that is because of the context length. Every AI model has a fixed limit called a <strong>context window</strong> (like 16k, 128k, or 200k tokens).</p>
<p>The context window holds your current message, all previous chat messages, system instructions, and any documents you’ve given it. The LLM uses all of this to personalise its responses.</p>
<p><strong>More context = more consistent answers.</strong></p>
<p><strong>GPT-5</strong> has a context window of roughly 400k tokens. <strong>Claude Sonnet 4</strong> supports about 200k tokens (up to 1M in beta). <strong>Gemini 2.5</strong> supports up to around 1M tokens.</p>
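<p>When a conversation outgrows the window, a common strategy is to drop the oldest messages until the rest fits. A minimal sketch, using word count as a crude stand-in for a real tokenizer:</p>

```typescript
interface Message { role: string; content: string; }

// Crude token stand-in: one token per whitespace-separated word.
const countTokens = (text: string): number =>
  text.trim().split(/\s+/).filter(Boolean).length;

// Keep the newest messages that fit inside the token budget.
function fitToContext(messages: Message[], maxTokens: number): Message[] {
  const kept: Message[] = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = countTokens(messages[i].content);
    if (used + cost > maxTokens) break; // oldest messages get dropped
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}
```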
<h2 id="heading-rag-retrieval-augmented-generation">RAG (Retrieval-Augmented Generation)</h2>
<p>As we know, every model has a limited context window, but to get more personalised answers we need to feed more data to the LLM.</p>
<p>Suppose you have a 1000-page manual for a piece of software. You want the answer to a question that spans 2–3 pages, and you need a short, crisp, human-like answer. So you decide to give all the data to the LLM, and the answer you get is <strong>“Context length exceeded”😂.</strong> This happens because an LLM can process only a limited number of tokens per session, so there is no way to feed all your data to the LLM at once. <strong>But</strong> if you know the answer likely lies in Chapter 3, which contains barely 10–11 pages, that many tokens can easily be handled by the LLM.</p>
<p>RAG does essentially the same thing. We feed all the data into the RAG pipeline, which indexes it based on its semantic meaning (the actual meaning). Whenever necessary, it retrieves the specific chunks that are likely to contain the answer. We then feed those chunks to the LLM, which can answer our queries very well.</p>
<h2 id="heading-vector-embeddings">Vector Embeddings</h2>
<p>When you visit a bank, the gatekeeper usually asks what your concern is and directs you to a specific counter. You then realise that everyone in that queue has the same concern. Similarly, in RAG, when you give it data, a <strong>magical function</strong> assigns it an address/location in the form of embeddings, generally an array of floats. The next time a user queries something, it also generates embeddings for the query, checks which data sits closest to that location, and returns it, which can then be passed to the LLM to give it more context.</p>
<p>RAG systems generally use <strong>vector databases</strong>, which store these arrays of floats representing the semantic meaning of the data. Vector embeddings are a way of converting words, sentences, or even images into numbers that represent their meaning. These numbers <strong>help AI understand similarity.</strong> For example, embeddings help AI know that “car” is closer in meaning to “vehicle” than to “banana.”</p>
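<p>“Closeness” between embeddings is usually measured with cosine similarity. A toy sketch with made-up 3-dimensional vectors (real embedding models output hundreds or thousands of dimensions):</p>

```typescript
// Cosine similarity between two equal-length vectors; 1 means same direction.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Made-up toy embeddings; in practice an embedding model produces these.
const embeddings: Record<string, number[]> = {
  car:     [0.9, 0.1, 0.0],
  vehicle: [0.8, 0.2, 0.1],
  banana:  [0.0, 0.1, 0.9],
};

cosineSimilarity(embeddings.car, embeddings.vehicle); // high (~0.98)
cosineSimilarity(embeddings.car, embeddings.banana);  // low  (~0.01)
```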
<h2 id="heading-embeddings-model">Embeddings Model</h2>
<p>Remember the magic function we discussed in vector embeddings? That is the <strong>embedding model.</strong> It is a kind of AI model that converts text (or images, audio, code) into <strong>numbers</strong>; specifically, a long list of numbers called a <strong>vector</strong>.</p>
<h2 id="heading-tools-in-ai">Tools in AI</h2>
<p>An LLM on its own is just conversational AI: it can only respond from the data it was trained on, and it can’t perform tasks like browsing the web or sending an email. With tools, we give the LLM the ability to perform such tasks.</p>
<p>For example, a weather tool. The LLM doesn’t have access to the latest data, so we have to feed it the latest data to get accurate results. Generally, while developing the app, we design some tools; here, a weather tool calls an API to get the weather in Mumbai. When the user gives a prompt like “What is the colour of the sky?”, it returns “blue”, because that is general knowledge. But when we ask “What’s the weather in Mumbai today?”, the model doesn’t know, because it has no access to live data, so it calls the weather tool to get the weather and returns a human-like response.</p>
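<p>A hypothetical sketch of that flow. <code>fetchWeather</code> stands in for a real API call, and the keyword check stands in for the model’s own tool-selection step:</p>

```typescript
// Stand-in for a real weather API; a real tool would make an HTTP request.
function fetchWeather(city: string): string {
  return `It is 31°C and sunny in ${city}.`; // hardcoded for this sketch
}

interface Tool {
  name: string;
  description: string; // the LLM reads this to decide when to call the tool
  run: (input: string) => string;
}

const weatherTool: Tool = {
  name: "get_weather",
  description: "Returns the current weather for a given city.",
  run: fetchWeather,
};

// In a real app the LLM itself decides whether to call the tool; here a
// simple pattern match stands in for that decision.
function answer(prompt: string): string {
  const match = prompt.match(/weather in (\w+)/i);
  if (match) return weatherTool.run(match[1]);
  return "Answered from the model's training data.";
}
```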
<h2 id="heading-ai-agent">AI Agent</h2>
<p>An AI Agent is an AI that can think, plan, and take actions on its own to complete a task using the tools it has. Any AI that can decide which tool to call by itself is an AI Agent.</p>
<p><strong>Example: Email &amp; Calendar Agent</strong><br />User gives prompt “Schedule a meeting with Pranit for Thursday at 6 PM and send him an invitation.”</p>
<p>The agent will call the <em>calendar tool</em> to check availability, call it again to create the event, and then call the email tool to send the invitation to Pranit. The AI decided for itself what actions to take and performed them; that is the difference between a plain AI and an AI Agent.</p>
<h2 id="heading-mcp-model-context-protocol">MCP (Model Context Protocol)</h2>
<p>Before MCP, every AI model had a different way to connect to tools, which was complicated and messy. Then MCP was introduced: a standard protocol for communication between apps and LLMs. It is similar to REST APIs, but for LLMs. Just as we write API endpoints in REST to work with data, in an MCP server we write actions, but with more context for LLMs: a description of each action, when to use it, whether it affects data, whether it is a pure action, and so on.</p>
<h3 id="heading-example-note-taking-app">Example: Note Taking App</h3>
<p>You create an MCP server that exposes actions like: <code>createNote</code> <code>getNotes</code> <code>updateNote</code><br />Now ChatGPT or Claude can interact with your app through MCP. Even in Cursor you can register your MCP server, and it will get the data from there.<br />When you give a prompt like <strong>“Create a note saying I finished the meeting at 5 PM.”</strong> the AI calls <code>createNote</code> through MCP, and the note appears in your app automatically.</p>
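<p>The shape of such a server can be sketched as a simple action registry. This illustrates the idea only; it is not the actual MCP SDK (the real protocol uses JSON-RPC and an official SDK):</p>

```typescript
// MCP-style action metadata: each action carries a description and a
// flag saying whether it mutates data, which the model can reason about.
interface Action {
  description: string;
  readOnly: boolean;
  run: (...args: any[]) => any;
}

const notes: string[] = [];

const actions: Record<string, Action> = {
  createNote: {
    description: "Create a new note with the given text.",
    readOnly: false,
    run: (text: string) => { notes.push(text); return notes.length - 1; },
  },
  getNotes: {
    description: "Return all notes.",
    readOnly: true,
    run: () => [...notes],
  },
};

// The model picks an action by name after a prompt like
// "Create a note saying I finished the meeting at 5 PM."
actions.createNote.run("I finished the meeting at 5 PM.");
```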
]]></content:encoded></item><item><title><![CDATA[Building Scalable SaaS: Multi-Tenant Architecture with PostgreSQL & TypeORM (Design & Implementation)]]></title><description><![CDATA[In the world of SaaS applications, multi-tenancy is a crucial architectural pattern that allows a single application instance to serve multiple customers (tenants) efficiently. Choosing the right multi-tenant strategy can significantly impact scalabi...]]></description><link>https://blogs.pranitpatil.com/building-scalable-saas-multi-tenant-architecture-with-postgresql-and-typeorm-design-and-implementation</link><guid isPermaLink="true">https://blogs.pranitpatil.com/building-scalable-saas-multi-tenant-architecture-with-postgresql-and-typeorm-design-and-implementation</guid><dc:creator><![CDATA[Pranit Patil]]></dc:creator><pubDate>Mon, 10 Mar 2025 18:25:11 GMT</pubDate><content:encoded><![CDATA[<p>In the world of SaaS applications, multi-tenancy is a crucial architectural pattern that allows a single application instance to serve multiple customers (tenants) efficiently. Choosing the right multi-tenant strategy can significantly impact scalability, maintainability, and security. In this blog, we will explore different multi-tenancy approaches and focus on implementing a <strong>schema-based multi-tenancy</strong> using <strong>PostgreSQL, NestJS, and TypeORM</strong>.</p>
<p>Multi-tenancy is a software architecture where a single application serves multiple customers while ensuring data isolation. Each tenant can have their own data and configurations while sharing the same infrastructure.</p>
<h2 id="heading-common-multi-tenancy-approaches">Common Multi-Tenancy Approaches</h2>
<ul>
<li><p><strong>Database Per Tenant:</strong> Each tenant owns its separate database, which provides strong isolation and good compliance. If any of your tenants want to keep their data on their server, you can use this. However, this approach comes with high operational overhead.</p>
</li>
<li><p><strong>Schema-per-Tenant:</strong> Each tenant has its own schema within the same database, providing good isolation. The main challenge is handling database migrations, which we will discuss in this blog, along with exploring possible solutions. If you have fewer than 100 tenants, this approach is recommended.</p>
</li>
<li><p><strong>Row-Level Multi-Tenancy:</strong> All tenants share the same schema and database, with each table entry including a tenant ID. This method is simple to implement and highly scalable. However, there is a risk of data leakage, and as data accumulates, database queries may take longer to execute. If you plan to serve more than 100 tenants, this approach is advisable.</p>
</li>
</ul>
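<p>For contrast, the row-level approach boils down to scoping every query by a tenant ID column. A minimal sketch (the table and column names are hypothetical):</p>

```typescript
// Row-level multi-tenancy: every query carries a tenant_id filter.
// The tenant ID is parameterized; the table name comes from code, never
// from user input.
function scopedQuery(table: string, tenantId: string) {
  return {
    text: `SELECT * FROM ${table} WHERE tenant_id = $1`,
    values: [tenantId],
  };
}

scopedQuery("users", "org_1");
// → { text: 'SELECT * FROM users WHERE tenant_id = $1', values: ['org_1'] }
```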
<p>Here, we will choose the second approach because we plan to serve a few large organizations and want good isolation. We need to isolate the tenants in their own schemas, which means we must select a database that supports schemas. PostgreSQL and Snowflake are examples of databases that support schemas.</p>
<h2 id="heading-choosing-tech-stack">Choosing Tech Stack</h2>
<p>For the database, we will choose <strong>Postgres</strong> because it is open-source and offers many extra features. For the backend, we will use <strong>NestJS</strong> because <strong>TypeORM</strong> works really well with it.</p>
<h3 id="heading-why-typeorm-why-not-prisma">Why TypeORM why not Prisma?</h3>
<p>Prisma is a popular ORM for Node-based applications and makes generating migrations easy. However, it doesn't support targeting multiple schemas with minimal effort. Although Prisma has a MultiSchema option, it requires specifying the schema name in each model, which isn't very convenient. It's actually easier to manage this without any ORM, but for a better developer experience, we will use TypeORM. TypeORM also helps create schema-based connections with the database, which is why it is our main choice for this application.</p>
<h2 id="heading-implementation-strategy">Implementation Strategy</h2>
<h3 id="heading-managing-common-and-tenant-schemas">Managing Common and Tenant Schemas</h3>
<p>To store global data, such as tenant metadata, configurations, and cross-tenant order tracking, we will create a <code>common</code> schema instead of using the default <code>public</code> schema. A dedicated TypeORM configuration will be used to generate migrations and create tables in this schema.</p>
<p>For tenant schemas, migrations need to be generated once and applied across multiple schemas. TypeORM generates migrations by comparing the current database schema with entity definitions. If no schema exists, TypeORM might redundantly generate <code>CREATE TABLE</code> queries. To prevent this, we use the <code>public</code> schema as a placeholder for tracking schema changes. This approach allows TypeORM to recognize tables and generate accurate migrations without duplication.</p>
<h3 id="heading-establishing-dynamic-database-connections">Establishing Dynamic Database Connections</h3>
<p>To manage connections dynamically, we will extend the tenant-specific TypeORM configuration by appending the schema name. In a real-world scenario, connection pooling and caching mechanisms will ensure efficiency and prevent unnecessary reconnections.</p>
<h3 id="heading-running-migrations-for-each-tenant">Running Migrations for Each Tenant</h3>
<p>A script will retrieve all tenant schemas from the database and apply pending migrations to each one. TypeORM will check the migrations table within each schema to determine which migrations have already been applied, preventing redundant execution. Additionally, we must run migrations in the <code>public</code> schema to ensure consistency in future schema updates.</p>
<h2 id="heading-the-coding-part">The CODING Part..</h2>
<blockquote>
<p>Please don't just copy the code as it is; it may have bugs.</p>
</blockquote>
<p>First, we will <a target="_blank" href="https://docs.nestjs.com/first-steps">set up</a> a NestJS project with the following structure.</p>
<pre><code class="lang-plaintext">/node_modules
/src
    - /config
        - common-orm.config.ts  // for migrating common schema.
        - tenant-orm.config.ts  // for tenant schema migration and connections.
    - /migrations               // will contain all the migrations
        - /common
        - /tenant
    - /entities
        - /public
            - org.entity.ts     // orgs table
        - /tenant   
            - user.entity.ts    // users table
    - /modules
        - /users
            - users.controller.ts 
            - users.module.ts
            - users.service.ts
    - /tenancy
        - tenancy.middleware.ts  // for dynamic connection handling
        - tenancy.utils.ts
package.json
run-tenant-migrations.ts          // Most important part of the setup. The migration script.
</code></pre>
<h3 id="heading-orm-config-files">Orm Config Files</h3>
<p>For common-orm.config.ts, create a DataSource object, but with the schema name set to “common”.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { DataSource } <span class="hljs-keyword">from</span> <span class="hljs-string">'typeorm'</span>;
<span class="hljs-keyword">import</span> * <span class="hljs-keyword">as</span> dotenv <span class="hljs-keyword">from</span> <span class="hljs-string">'dotenv'</span>;

dotenv.config();

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> AppDataSource = <span class="hljs-keyword">new</span> DataSource({
  <span class="hljs-keyword">type</span>: <span class="hljs-string">'postgres'</span>,
  host: process.env.DB_HOST,
  port: <span class="hljs-built_in">parseInt</span>(process.env.DB_PORT || <span class="hljs-string">'5432'</span>, <span class="hljs-number">10</span>) || <span class="hljs-number">5432</span>,
  username: process.env.DB_USERNAME,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  entities: [
    __dirname + <span class="hljs-string">'/../entities/common/*.entity{.ts,.js}'</span>, <span class="hljs-comment">// ✅ Ensure common entities are loaded</span>
  ],
  schema: <span class="hljs-string">'common'</span>,    <span class="hljs-comment">// make sure to add this line</span>
  migrations: [__dirname + <span class="hljs-string">'/../migrations/common/*.ts'</span>], <span class="hljs-comment">// ✅ Common schema migrations</span>
});
</code></pre>
<p>tenant-orm.config.ts is very similar to common-orm.config.ts; the difference is that we won’t mention the schema name here, so it will target the default public schema, and the generated migration query will look like:</p>
<pre><code class="lang-typescript">CREATE TABLE user .... <span class="hljs-comment">// ✅ </span>

<span class="hljs-comment">// instead of </span>
CREATE TABLE <span class="hljs-string">"tenant"</span>.user   <span class="hljs-comment">// X</span>
</code></pre>
<p>Creating a generic query makes migration easier: we can simply run it after setting the search path (<strong>SET search_path = 'schema_name'</strong>), and the generated SQL commands will create tables in the targeted schema.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// tenant-orm.config.ts</span>

<span class="hljs-keyword">import</span> { DataSource } <span class="hljs-keyword">from</span> <span class="hljs-string">'typeorm'</span>;
<span class="hljs-keyword">import</span> * <span class="hljs-keyword">as</span> dotenv <span class="hljs-keyword">from</span> <span class="hljs-string">'dotenv'</span>;

dotenv.config();

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> AppDataSource = <span class="hljs-keyword">new</span> DataSource({
  <span class="hljs-keyword">type</span>: <span class="hljs-string">'postgres'</span>,
  host: process.env.DB_HOST,
  port: <span class="hljs-built_in">parseInt</span>(process.env.DB_PORT || <span class="hljs-string">'5432'</span>, <span class="hljs-number">10</span>) || <span class="hljs-number">5432</span>,
  username: process.env.DB_USERNAME,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  entities: [
    __dirname + <span class="hljs-string">'/../entities/common/*.entity{.ts,.js}'</span>,
    __dirname + <span class="hljs-string">'/../entities/tenant/*.entity{.ts,.js}'</span>, <span class="hljs-comment">// ✅ Ensure tenant entities are loaded</span>
  ],
  migrations: [__dirname + <span class="hljs-string">'/../migrations/tenant/*.ts'</span>], <span class="hljs-comment">// ✅ Tenant schema migrations</span>
});
</code></pre>
<h3 id="heading-entities">Entities</h3>
<p>For common entities, we need to specifically mention the schema name. However, for tenant schemas, we don't explicitly mention the schema name because we will assign it dynamically.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// entities/common/org.entity.ts </span>

<span class="hljs-keyword">import</span> { Entity, PrimaryGeneratedColumn, Column } <span class="hljs-keyword">from</span> <span class="hljs-string">'typeorm'</span>;

<span class="hljs-meta">@Entity</span>({ schema: <span class="hljs-string">'common'</span>, name: <span class="hljs-string">'orgs'</span> })
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> Org {
  <span class="hljs-meta">@PrimaryGeneratedColumn</span>(<span class="hljs-string">'increment'</span>)
  id: <span class="hljs-built_in">string</span>;

  <span class="hljs-meta">@Column</span>({ unique: <span class="hljs-literal">true</span> })
  schemaName: <span class="hljs-built_in">string</span>; <span class="hljs-comment">// Example: schema_one, schema_two</span>
}
</code></pre>
<pre><code class="lang-typescript"><span class="hljs-comment">// entities/tenant/user.entity.ts</span>

<span class="hljs-keyword">import</span> { Entity, PrimaryGeneratedColumn, Column } <span class="hljs-keyword">from</span> <span class="hljs-string">'typeorm'</span>;

<span class="hljs-meta">@Entity</span>({ name: <span class="hljs-string">'users'</span> }) <span class="hljs-comment">// Schema will be assigned dynamically</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> User {
  <span class="hljs-meta">@PrimaryGeneratedColumn</span>(<span class="hljs-string">'increment'</span>)
  id: <span class="hljs-built_in">string</span>;

  <span class="hljs-meta">@Column</span>()
  name: <span class="hljs-built_in">string</span>;

  <span class="hljs-meta">@Column</span>({ <span class="hljs-keyword">type</span>: <span class="hljs-string">'text'</span> })
  tenantId: <span class="hljs-built_in">string</span>;
}
</code></pre>
<h3 id="heading-connection-handling">Connection handling</h3>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Injectable } <span class="hljs-keyword">from</span> <span class="hljs-string">'@nestjs/common'</span>;
<span class="hljs-keyword">import</span> { DataSource, DataSourceOptions } <span class="hljs-keyword">from</span> <span class="hljs-string">'typeorm'</span>;
<span class="hljs-keyword">import</span> { AppDataSource } <span class="hljs-keyword">from</span> <span class="hljs-string">'../config/tenant-orm.config'</span>;

<span class="hljs-keyword">const</span> tenantConnections: { [schemaName: <span class="hljs-built_in">string</span>]: DataSource } = {};

<span class="hljs-meta">@Injectable</span>()
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> TenantConnectionService {
  <span class="hljs-keyword">async</span> getTenantConnection(tenantSchema: <span class="hljs-built_in">string</span>): <span class="hljs-built_in">Promise</span>&lt;DataSource&gt; {
    <span class="hljs-comment">// If a connection is already available use it</span>
    <span class="hljs-keyword">if</span> (tenantConnections[tenantSchema]) {
      <span class="hljs-keyword">return</span> tenantConnections[tenantSchema];
    }

    <span class="hljs-keyword">const</span> dataSource = <span class="hljs-keyword">new</span> DataSource({
      ...AppDataSource.options,
      schema: tenantSchema, <span class="hljs-comment">// Assign tenant-specific schema</span>
      name: <span class="hljs-string">`<span class="hljs-subst">${tenantSchema}</span>`</span>, <span class="hljs-comment">// Unique connection name</span>
    } <span class="hljs-keyword">as</span> DataSourceOptions);

    <span class="hljs-keyword">await</span> dataSource.initialize();
    tenantConnections[tenantSchema] = dataSource;

    <span class="hljs-keyword">return</span> dataSource;
  }
}
</code></pre>
<p>This is a utility function for dynamic connection handling. Suppose a user from org 1 makes an API request; most likely the schema name will be included in the JWT token payload. So in the middleware, after authentication, we pass the schema name to this function and acquire a connection that targets that user’s schema.</p>
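<p>The middleware flow can be sketched framework-free. <code>getTenantConnection</code> below is a stub standing in for the service above, so the sketch stays self-contained:</p>

```typescript
interface JwtPayload { sub: string; schemaName: string; }

// Stub standing in for TenantConnectionService.getTenantConnection;
// the real version returns an initialized TypeORM DataSource.
async function getTenantConnection(schema: string): Promise<{ schema: string }> {
  return { schema };
}

// After authentication, read the schema name from the decoded JWT
// payload and attach a tenant-scoped connection to the request.
async function tenancyMiddleware(
  req: { user?: JwtPayload; tenantConnection?: { schema: string } },
): Promise<void> {
  if (!req.user?.schemaName) {
    throw new Error("Missing tenant schema in token payload");
  }
  req.tenantConnection = await getTenantConnection(req.user.schemaName);
}
```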
<h2 id="heading-running-the-migrations">Running The Migrations</h2>
<p>Running Migrations for the common schema is pretty much straightforward.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// GENERATE Migrations for common</span>
yarn typeorm-ts-node-commonjs migration:generate ./src/migrations/common/init -d ./src/config/common-orm.config.ts

<span class="hljs-comment">// RUN Migrations for common</span>
yarn typeorm-ts-node-commonjs migration:run -d ./src/config/common-orm.config.ts
</code></pre>
<pre><code class="lang-typescript">
<span class="hljs-comment">// GENERATE Migrations for tenant</span>
yarn typeorm-ts-node-commonjs migration:generate ./src/migrations/tenant/init -d ./src/config/tenant-orm.config.ts
</code></pre>
<p>For the tenant migrations, we first create the migration files with tenant-orm.config.ts.</p>
<blockquote>
<p>Before running any migrations, please make sure that the schemas you want to run migrations for already exist in the database. Or you can handle this part in the script as well.</p>
</blockquote>
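<p>Creating missing schemas can be folded into the script with one guarded statement per tenant before <code>runMigrations()</code>. A sketch of the SQL builder (the validation regex is a simple safeguard, not a full identifier parser):</p>

```typescript
// Schema names cannot be bound as query parameters, so validate the
// name before interpolating it into the statement.
function ensureSchemaSql(schemaName: string): string {
  if (!/^[a-z_][a-z0-9_]*$/.test(schemaName)) {
    throw new Error(`Invalid schema name: ${schemaName}`);
  }
  return `CREATE SCHEMA IF NOT EXISTS "${schemaName}"`;
}

// In the migration script, before running migrations for a tenant:
//   await tenantDataSource.query(ensureSchemaSql(tenant.schemaName));
```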
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { DataSource } <span class="hljs-keyword">from</span> <span class="hljs-string">'typeorm'</span>;
<span class="hljs-keyword">import</span> { AppDataSource } <span class="hljs-keyword">from</span> <span class="hljs-string">'./src/config/tenant-orm.config'</span>;

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">applyMigrationsToTenants</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> dataSource = <span class="hljs-keyword">await</span> AppDataSource.initialize();

  <span class="hljs-comment">// Fetch all tenant schemas from the common.orgs table</span>
  <span class="hljs-keyword">const</span> tenants: { schemaName: <span class="hljs-built_in">string</span> }[] = <span class="hljs-keyword">await</span> dataSource.query(
    <span class="hljs-string">'SELECT "schemaName" FROM "common".orgs'</span>,
  );
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Tenants'</span>, tenants);
  <span class="hljs-keyword">await</span> dataSource.destroy();

  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> tenant <span class="hljs-keyword">of</span> tenants) {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`🔄 Running migrations for tenant: <span class="hljs-subst">${tenant.schemaName}</span>`</span>);

    <span class="hljs-keyword">const</span> tenantDataSource = <span class="hljs-keyword">new</span> DataSource({
      ...AppDataSource.options,
      schema: tenant.schemaName,
      migrations: [__dirname + <span class="hljs-string">'/src/migrations/tenant/*.ts'</span>], <span class="hljs-comment">// Apply tenant-specific migrations</span>
      extra: {
        options: <span class="hljs-string">`set search_path='<span class="hljs-subst">${tenant.schemaName}</span>'`</span>, <span class="hljs-comment">// Ensure migrations run in the tenant schema</span>
      },
    });

    <span class="hljs-comment">// This is just for the demo. See if you can optimize this.</span>
    <span class="hljs-keyword">await</span> tenantDataSource.initialize();
    <span class="hljs-keyword">await</span> tenantDataSource.query(<span class="hljs-string">`SET search_path TO '<span class="hljs-subst">${tenant.schemaName}</span>'`</span>);
    <span class="hljs-keyword">await</span> tenantDataSource.runMigrations();
    <span class="hljs-keyword">await</span> tenantDataSource.destroy();
  }

  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'✅ Migrations applied to all tenants!'</span>);
}

applyMigrationsToTenants().catch(<span class="hljs-function">(<span class="hljs-params">err</span>) =&gt;</span> <span class="hljs-built_in">console</span>.error(err));
</code></pre>
<p>When we create tables for the common schema, we will add entries for all the required org schemas to the orgs table, including the public schema (as a dummy org to keep track of tables). When we run this script, it will fetch all the schemas from the database, create a connection for each schema by extending the tenant DataSource, run the migrations for that schema, and close the connection.</p>
<p>This method streamlines the management of multiple schemas, reducing the complexity and time typically required for such tasks.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In conclusion, building a scalable SaaS application using a schema-based multi-tenancy approach with PostgreSQL and TypeORM offers a robust solution for serving multiple tenants while ensuring data isolation and efficient resource utilization. By leveraging PostgreSQL's schema support and TypeORM's dynamic connection handling, developers can create a flexible and maintainable architecture. This approach is particularly suitable for applications serving a limited number of large organizations, providing a balance between isolation and operational efficiency. Implementing this strategy requires careful planning of schema management, dynamic connections, and migration processes, but it ultimately streamlines the management of tenant data and enhances the scalability of the application.</p>
]]></content:encoded></item></channel></rss>