Wednesday, January 14, 2026

ETL vs. ELT vs. ETLT: What’s the Real Difference?

Here is what each approach does and where it fits in a modern architecture 👇



📌 1. ETL (Extract, Transform, Load) — "The Classic"

Process: Data is extracted ➡ Transformed in a separate staging server ➡ Loaded into the Warehouse.
Best For: Complex transformations, strict security/compliance masking before data lands, or legacy on-prem systems with limited compute.
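
To make that flow concrete, here's a minimal Python sketch of the ETL pattern. Everything here (the file name, columns, and the in-memory "warehouse") is a made-up stand-in, not any specific tool's API:

```python
# Minimal ETL sketch (hypothetical names, not a specific tool's API).
# Key point: transformation happens on a staging server BEFORE load.
import csv

WAREHOUSE: list[dict] = []  # stand-in for a real warehouse table

def extract(path: str) -> list[dict]:
    """Extract: pull raw rows from a source system (here, a CSV file)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: clean and mask on the staging server, so PII never lands raw."""
    clean = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except (KeyError, ValueError):
            continue  # drop malformed rows before they ever reach storage
        clean.append({**r, "email": "***MASKED***", "amount": amount})
    return clean

def load(rows: list[dict]) -> None:
    """Load: only the transformed, masked rows land in the warehouse."""
    WAREHOUSE.extend(rows)

load(transform(extract("orders.csv")))
```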

☁️ 2. ELT (Extract, Load, Transform) — "The Modern Standard"

Process: Extract raw data ➡ Load immediately into the Warehouse ➡ Transform using SQL/dbt inside the warehouse.
Best For: Modern Cloud Data Warehouses (Snowflake, BigQuery, Redshift) where storage is cheap and compute is massive.
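
The same pipeline in ELT style, as a minimal sketch: sqlite3 stands in for a cloud warehouse, and the table and column names are invented for illustration. In practice the post-load SQL would usually live in a dbt model:

```python
# Minimal ELT sketch: load raw data first, transform with SQL inside
# the warehouse. sqlite3 stands in for Snowflake/BigQuery/Redshift.
import sqlite3

conn = sqlite3.connect(":memory:")

# Load: raw rows land in the warehouse untouched.
conn.execute("CREATE TABLE raw_orders (id INTEGER, email TEXT, amount TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, "a@x.com", "19.99"), (2, "b@y.com", "not-a-number")],
)

# Transform: the warehouse's own compute does the heavy lifting, in SQL.
conn.execute("""
    CREATE TABLE clean_orders AS
    SELECT id, CAST(amount AS REAL) AS amount
    FROM raw_orders
    WHERE amount GLOB '*[0-9]*'   -- crude validity filter, for the sketch
""")
print(conn.execute("SELECT * FROM clean_orders").fetchall())
```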

⚖️ 3. ETLT (Extract, Transform, Load, Transform) — "The Hybrid"

Process: Lightweight cleaning (PII masking) before loading ➡ Heavy analytics transformations after loading.
Best For: When you need both strict Data Quality checks (pre-load) and complex analytical modeling (post-load).
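
And the hybrid, sketched the same way: a light pre-load transform in Python (the PII masking), then the heavy modeling in SQL with warehouse compute. Again, sqlite3 and the column names are stand-ins:

```python
# Minimal ETLT sketch: small "T" before load (PII masking),
# big "T" after load (analytical modeling in warehouse SQL).
import sqlite3

raw = [{"id": 1, "email": "a@x.com", "amount": 19.99},
       {"id": 2, "email": "b@y.com", "amount": 5.00}]

# T1 (pre-load): mask PII so it never lands in the warehouse raw.
masked = [{**r, "email": "***MASKED***"} for r in raw]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, email TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (:id, :email, :amount)", masked)

# T2 (post-load): heavy analytical transform inside the warehouse.
conn.execute("""
    CREATE TABLE order_stats AS
    SELECT COUNT(*) AS orders, SUM(amount) AS revenue FROM orders
""")
print(conn.execute("SELECT * FROM order_stats").fetchall())
```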



Monday, January 5, 2026

Oracle’s "Always Free" Tier

Oracle Cloud is underrated for side projects

If you are still burning free credits on AWS, Azure, or GCP for your learning or pet projects, you are seriously missing out. Most "free tiers" are either time-boxed to 12 months or offer compute power so weak it barely runs a basic application.

I recently started deploying my projects to Oracle Cloud Infrastructure (OCI), and the resources they give away for free are genuinely surprising.

While others give you 1 vCPU and 1 GB of RAM, Oracle’s "Always Free" tier offers:
✅ 4 ARM Cores
✅ A massive 24 GB of RAM
✅ 10 TB of Data Egress monthly

This isn't just for static pages. This is enough power to run serious applications.
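
For a feel of what provisioning that machine looks like, here is a rough sketch using the OCI Python SDK (`pip install oci`). The OCIDs are placeholders you'd replace with values from your own tenancy, and the model fields are written from memory, so treat it as a starting point rather than a verified recipe:

```python
# Sketch: launch the Always Free ARM shape (VM.Standard.A1.Flex)
# with 4 OCPUs / 24 GB RAM via the OCI Python SDK.
# All OCIDs below are placeholders for values from your own tenancy.
import oci

config = oci.config.from_file()  # reads ~/.oci/config
compute = oci.core.ComputeClient(config)

details = oci.core.models.LaunchInstanceDetails(
    availability_domain="<availability-domain>",
    compartment_id="ocid1.compartment.oc1..<placeholder>",
    shape="VM.Standard.A1.Flex",
    shape_config=oci.core.models.LaunchInstanceShapeConfigDetails(
        ocpus=4, memory_in_gbs=24,  # the full Always Free allotment
    ),
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..<placeholder>",  # e.g. an Ubuntu ARM image
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..<placeholder>",
        assign_public_ip=True,
    ),
    metadata={"ssh_authorized_keys": open("id_rsa.pub").read()},
)

instance = compute.launch_instance(details).data
print(instance.id, instance.lifecycle_state)
```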


Check out this blog on how to create a Linux instance on Oracle Cloud and deploy an n8n project on it: https://lnkd.in/gAgxZViF

Tuesday, December 30, 2025

Healthcare AI System Architecture

"𝗧𝗵𝗶𝘀 𝗢𝗻𝗲 𝗗𝗶𝗮𝗴𝗿𝗮𝗺 𝗥𝗲𝗱𝘂𝗰𝗲𝗱 𝗔𝗜 𝗖𝗼𝘀𝘁 𝗯𝘆 𝟳𝟭%"

We didn’t change models. We changed where intelligence lives.

𝗧𝗵𝗲 𝗖𝗼𝗺𝗺𝗼𝗻 𝗠𝗶𝘀𝘁𝗮𝗸𝗲
Most engineering teams try to reduce AI cost by:
• Switching LLM providers
• Tuning prompts endlessly
• Debating benchmarks

That’s not where the leverage is.
The real shift happened when we stopped treating the model as the brain
and started treating the system as the brain.

𝗪𝗵𝗮𝘁 𝗔𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗖𝗵𝗮𝗻𝗴𝗲𝗱
We introduced a 𝗠𝗮𝘀𝘁𝗲𝗿 𝗖𝗼𝗻𝘁𝗿𝗼𝗹 𝗣𝗹𝗮𝗻𝗲 (𝗠𝗖𝗣) instead of routing everything to a single LLM. Four layers, sketched in code after this list:

1️⃣ 𝗖𝗮𝗰𝗵𝗲
Repetitive and near-duplicate requests never hit a model again.
2️⃣ 𝗥𝗼𝘂𝘁𝗲𝗿
SLMs handle execution-heavy tasks.
LLMs handle judgment and ambiguity.
3️⃣ 𝗖𝗼𝗻𝗳𝗶𝗱𝗲𝗻𝗰𝗲 𝗚𝗮𝘁𝗲𝘀
High confidence → instant response
Low confidence → controlled escalation
4️⃣ 𝗙𝗮𝗹𝗹𝗯𝗮𝗰𝗸𝘀
RAG, memory, or a stronger model — only when required.
No blind retries. No runaway costs.
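
Here is a minimal sketch of that request path, assuming hypothetical `call_slm`/`call_llm` clients and a toy exact-match cache (a production system would use semantic caching for near-duplicates and a real confidence signal):

```python
# Sketch of a Master Control Plane request path:
# cache -> router -> confidence gate -> fallback.
# All names here are hypothetical; the model calls are stubs.
from hashlib import sha256

CACHE: dict[str, str] = {}        # toy exact-match cache
CONFIDENCE_THRESHOLD = 0.8

def call_slm(prompt: str) -> tuple[str, float]:
    """Small model: cheap and fast, returns (answer, confidence)."""
    return f"slm:{prompt}", 0.9   # stub

def call_llm(prompt: str) -> str:
    """Large model: expensive, reserved for judgment and ambiguity."""
    return f"llm:{prompt}"        # stub

def is_ambiguous(prompt: str) -> bool:
    """Router heuristic: send long, open-ended asks straight to the LLM."""
    return prompt.rstrip().endswith("?") and len(prompt.split()) > 20

def handle(prompt: str) -> str:
    key = sha256(prompt.encode()).hexdigest()

    # 1. Cache: repeat requests never hit a model again.
    if key in CACHE:
        return CACHE[key]

    # 2. Router: SLM for execution-heavy tasks, LLM for judgment.
    if is_ambiguous(prompt):
        answer = call_llm(prompt)
    else:
        answer, confidence = call_slm(prompt)
        # 3. Confidence gate: instant response when the SLM is sure.
        if confidence < CONFIDENCE_THRESHOLD:
            # 4. Fallback: one controlled escalation, no blind retries.
            answer = call_llm(prompt)

    CACHE[key] = answer
    return answer
```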

𝗧𝗵𝗲 𝗢𝘂𝘁𝗰𝗼𝗺𝗲
• 71% reduction in AI cost
• Lower latency across workflows
• Predictable production behavior
• Fewer on-call surprises
Same models.
Very different results.

𝗧𝗵𝗲 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆
Architecture beats optimization. Always.