The Kartik Voice

Recently, DeepSeek released a paper introducing a new approach called Context Optical Compression, which stores text as vision tokens so AI can process long context more efficiently.
I have broken down their research in an easy, beginner-friendly way so you can understand exactly what DeepSeek discovered and why the whole AI world is talking about it.

Before going into depth, let's cover a few basics:

OCR (Optical Character Recognition) - OCR means reading text from an image.
Token - A token is a small piece of a word. Example - "Kartik" may be split into "Kar" + "tik" or kept whole as "Kartik" (see the small sketch after this list).
LLM (Large Language Model) - LLMs are trained on massive datasets and answer based on the information they have learnt.
Context Window - The memory the AI uses to recall the previous conversation.
Vision Token - A small piece of an image that AI can read and understand.
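
To make tokens concrete, here is a tiny sketch using the open-source tiktoken library (my assumption for illustration; DeepSeek's own tokenizer will split words differently):

```python
# pip install tiktoken  (illustrative only; not DeepSeek's tokenizer)
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # a general-purpose BPE tokenizer
tokens = enc.encode("Kartik is learning about vision tokens")

print(len(tokens), "tokens")                 # a handful of tokens for a short sentence
print([enc.decode([t]) for t in tokens])     # see how the words get split into pieces
```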

DeepSeek-OCR: Context Optical Compression

Goal - The idea is to encode the equivalent of a thousand words in a single image and have the model read them back. This approach could transform how we think about AI memory and long-context processing, and its aim is to handle very long contexts, possibly reaching 10 million tokens or beyond.

Why does AI need this?

Currently, Large Language Models (LLMs) like DeepSeek, ChatGPT, and Gemini talk to us using tokens, and they struggle to process long textual content because of limited memory. Because of this, they forget the past conversation.
Right now:
  • 1 word ≈ 1 token
  • More words → more tokens → more memory needed
There is always a memory limit, for both normal and premium accounts. The longer you chat with an AI model, the more tokens it uses. Once the limit is full, it starts forgetting the older parts of the conversation. This is the big problem with the model.

What is DeepSeek's new Idea?

What if, instead of storing text as text, we stored it in an image, broke that image down into vision tokens, and let the AI read them back? This idea is called Context Optical Compression (COC).
Breakdown of COC:
  • Context - memory or conversation history
  • Optical - using images
  • Compression - making things smaller

Why images? Because images can pack a large amount of data into less space. Imagine taking a picture of the classroom board instead of writing everything in your notebook: the photo stores everything in less space.
It is the same with AI, and that is why DeepSeek wants to use images as AI memory.

How Good Is DeepSeek’s Compression?

DeepSeek-OCR can convert text into vision tokens and convert those vision tokens back into text with high accuracy.

According to the benchmark results, around 100 vision tokens can store roughly 1,000 words with almost perfect accuracy. That is about 10 times smaller than the normal way.

Can This Change the Future of AI Memory?

Yes. Today, AI can handle maybe 128k to 1M tokens in a long chat. With DeepSeek's compression idea, that could grow to 10M or even 20M tokens.
Benefits: the AI can remember more, respond faster, and compute more cheaply.

How Do Images Become Tokens?

DeepSeek uses a model called ViT (Vision Transformer) to read images. In simple terms, ViT cuts an image into small patches, and each patch becomes a token the AI can understand.
Example:
A patch of 16x16 pixels has 256 pixels.
Each pixel has 3 color values (red, green, blue).
So 256 x 3 = 768 numbers, which become the embedding for that patch. This lets the AI understand the image in small parts.
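
Here is a tiny NumPy sketch of that patching step (my own toy illustration, not DeepSeek's code), showing how each 16x16 patch becomes 768 numbers:

```python
import numpy as np

# A fake 224x224 RGB "image" standing in for a rendered page of text.
image = np.random.rand(224, 224, 3)

patch = 16
h, w, c = image.shape
# Cut the image into non-overlapping 16x16 patches, then flatten each patch.
patches = (image
           .reshape(h // patch, patch, w // patch, patch, c)
           .transpose(0, 2, 1, 3, 4)
           .reshape(-1, patch * patch * c))

print(patches.shape)  # (196, 768): 196 patches, each a 768-number "token"
```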

DeepSeek’s Secret Ingredient: Deep Encoder

The big issue with this process is that images can produce too many vision tokens, which increases memory again. So DeepSeek added a smart component called the Deep Encoder.
The Deep Encoder helps with:
  • Reducing the number of vision tokens
  • Keeping only the important parts
  • Handling high-quality images better

It works in 2 stages:

Stage 1: 
SAM (Segment Anything Model) - SAM looks at which parts of the image matter most.
Example - If the image is a page of text on a background, SAM focuses on the text, not the blank spaces.

Stage 2: 
CLIP + ViT + Deep Encoder - Once SAM selects the important areas:

CLIP ViT creates embeddings (understandable picture pieces)
The Deep Encoder compresses these pieces into fewer tokens

Finally, the compressed vision tokens are sent to the DeepSeek-3B MoE decoder.
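
To keep the flow straight, here is a toy Python sketch of the whole pipeline (purely conceptual, with made-up stand-in functions, not DeepSeek's actual code): pick the patches that matter, embed them, then compress them into fewer vision tokens before handing them to the decoder.

```python
import numpy as np

def select_important_patches(patches, keep_ratio=0.5):
    """Stand-in for SAM: keep the patches with the most 'content' (here, variance)."""
    scores = patches.var(axis=1)
    keep = np.argsort(scores)[-int(len(patches) * keep_ratio):]
    return patches[np.sort(keep)]

def embed_patches(patches, dim=768):
    """Stand-in for CLIP/ViT: project each patch to an embedding (random weights here)."""
    rng = np.random.default_rng(0)
    projection = rng.standard_normal((patches.shape[1], dim))
    return patches @ projection

def compress_tokens(embeddings, factor=4):
    """Stand-in for the Deep Encoder: merge groups of tokens into one by averaging."""
    n = (len(embeddings) // factor) * factor
    return embeddings[:n].reshape(-1, factor, embeddings.shape[1]).mean(axis=1)

patches = np.random.rand(196, 768)                 # 196 raw patches from the ViT step
important = select_important_patches(patches)      # Stage 1: keep the informative half
embeddings = embed_patches(important)              # Stage 2a: embeddings
vision_tokens = compress_tokens(embeddings)        # Stage 2b: fewer, denser tokens
print(len(patches), "->", len(vision_tokens))      # e.g. 196 -> 24
```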

What Is DeepSeek-3B MoE (Mixture of Experts)?

It is a decoder model that chooses which expert module is best for the job. It has 3B total parameters, but only about 570M are active at a time, which makes it fast and efficient. The decoder reads the vision tokens and converts them back into text.
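
Here is a tiny sketch of the Mixture-of-Experts idea (my own toy example, not the real DeepSeek-3B architecture): a router scores all the experts for each token, but only the top few actually run, which is why only a fraction of the parameters are active at any time.

```python
import numpy as np

rng = np.random.default_rng(0)

num_experts, active_experts, dim = 8, 2, 16
experts = [rng.standard_normal((dim, dim)) for _ in range(num_experts)]  # expert weights
router = rng.standard_normal((dim, num_experts))                         # routing weights

def moe_layer(token):
    scores = token @ router                        # how well each expert fits this token
    top = np.argsort(scores)[-active_experts:]     # pick only the best few experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()
    # Only the chosen experts do any work; the rest stay idle.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

vision_token = rng.standard_normal(dim)
output = moe_layer(vision_token)
print(output.shape)   # (16,): only 2 of the 8 experts were active for this token
```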

Different Modes - DeepSeek-OCR offers several modes depending on how much detail is needed, trading image resolution against the number of vision tokens used.


Why does this matter so much?

DeepSeek is not just improving Optical Character Recognition; they are changing how AI stores, compresses, and remembers information. This could be one of the biggest changes in the way LLMs work.
This research could lead to:

  • AI systems with huge memory to remember more context
  • Better knowledge storage 
  • Faster processing
  • Cheaper AI costs

This could become a new type of AI memory.

Where Is It Available?

The research paper is available here - https://github.com/deepseek-ai/DeepSeek-OCR/blob/main/DeepSeek_OCR_paper.pdf
The code is on GitHub - https://github.com/deepseek-ai/DeepSeek-OCR

Happy Exploring! Happy Learning!   




Every day there is a new article on AI: a new LLM, Gen AI transforming the world, or AI Agents and Agentic AI as the next big thing. It sounds exciting, yet a little confusing, right? Are they the same thing under different titles, or do they really mean something different?

That is what I am explaining here. No technical jargon, no textbook definitions, just clear, real-life cases so you can finally see what makes Gen AI, AI Agents, and Agentic AI special (and how they all relate to each other).

Generative AI

Generative AI is a kind of Artificial Intelligence that can generate new content. It learns patterns from large volumes of data, either available on the internet or data that we feed it, and then produces text, images, audio, video, or even code.

The most commonly implemented Generative AI systems are based on LLMs (Large Language Models); examples include ChatGPT, Gemini, Claude, and Perplexity. These models are trained on massive datasets and answer based on the information they have learnt.

In its simplest form, generative AI is reactive. It only reacts to your input; it does not think in advance, and it has no actual memory unless it is built to store context.

Examples: writing emails, summarizing documents, generating content, images, and voiceovers, and many others. When you ask a Gen AI model, "What is the weather today?", it cannot answer unless it is linked to live data.

Advantages of using Gen AI

Content Creation: Write articles, social media posts, and emails.
Automating Repetitive Tasks: Auto-generate product descriptions, summarize documents, generate templates.
Multimodal Capabilities: Create logos and marketing visuals, generate voiceovers and podcasts.
Boosting Productivity: Speed up the creative process and support better decision-making.

Limitations of using Gen AI

Data Cutoff: Gen AI models are trained up to a certain date and do not know what has changed in real time since then.
No Initiative: They will do nothing without prompting.
Accuracy Issues: They sometimes generate incorrect or hallucinated information.
No Personalization: Without memory, they forget past interactions.
No Tool Use: They cannot check live weather or flight prices, or carry out transactions, unless connected to external APIs.

Real-World Examples of Gen AI

ChatGPT: Generates human-like text responses.
DALL·E / MidJourney: Creates images from text prompts.
GitHub Copilot: Assists developers by generating code.
Runway: Generates videos and creative media.

Recent Enhancements

  • The most significant change is the introduction of memory. More recent models, such as ChatGPT, can recall your preferences and context across a conversation.
  • Models can now work with large quantities of information at once; think hundreds of pages of documents instead of just a few.

AI Agent

An AI Agent is a program that accepts input, thinks, and performs actions to finish a task with the help of tools, memory, and knowledge. Unlike pure Generative AI, which only reacts with an answer, an AI Agent can actually do something with the information it generates. It is more independent and has some autonomy to make decisions. Usually, AI agents are designed to perform narrow, specific tasks effectively.

Once you set up an LLM for your use case and give it access to external APIs or tools, it becomes smart enough to take action. For example, it can call a flight API and fetch the latest ticket price.

If your LLM cannot answer a particular input on its own, it will look for external tools that can handle that case, for example, "What is the weather today?"

Think of it as a personal assistant who doesn't just tell you "flights are available" but actually goes and books one for you.
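
Here is a tiny sketch of that loop in Python (everything in it is hypothetical: the fake_llm function and the flight tool are stand-ins, not a real model or API):

```python
# A toy agent loop: the "LLM" decides whether it needs a tool, the agent runs it,
# and the result is fed back in. Real frameworks (LangChain, AutoGen, etc.) do the
# same dance with an actual model behind the scenes.

def get_flight_price(route):
    # Hypothetical tool; in reality this would call a flight-search API.
    return f"Cheapest fare for {route}: $420"

TOOLS = {"get_flight_price": get_flight_price}

def fake_llm(question, tool_result=None):
    # Stand-in for a real LLM call. It "decides" to use a tool if it has no result yet.
    if tool_result is None:
        return {"action": "get_flight_price", "input": "HYD-BLR"}
    return {"action": "answer", "input": f"{tool_result}. Want me to book it?"}

def run_agent(question):
    step = fake_llm(question)
    while step["action"] != "answer":
        result = TOOLS[step["action"]](step["input"])   # the agent actually does something
        step = fake_llm(question, tool_result=result)
    return step["input"]

print(run_agent("What is the cheapest flight from Hyderabad to Bangalore?"))
```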

Advantages of AI Agents

Works Automatically: You only need to give it a task, and it takes care of all the steps without you having to guide it every time.
Uses Tools and Apps: It can integrate with other applications, search the internet, analyze data, and manage programs to complete tasks.
Saves Time: It can be available 24/7, so you do not have to do repetitive or multi-step tasks yourself.
Remembers the Task Context: It keeps a record of its actions while doing the task, so it does not lose its way.

Limitations of AI Agents

Can Be Slow and Costly: Because it involves heavy AI processing, it may take time and be expensive to run.
Makes Mistakes Sometimes: If it misunderstands your intent or a tool's response, it can retrieve incorrect information, such as showing prices for the wrong date, or leave out part of an answer.
Security Risks: Because it has access to other tools and data, there is a risk unless it is closely monitored.
Hard to Debug: When things go wrong, it can be difficult to tell where and why.

Real-World Examples of AI Agents

Research Helper: Does research online and summarizes it on your behalf.
Data Assistant: Gathers, cleans, and examines data automatically.
Travel Search: Finds you the best flight and hotel deals.
Customer Support Agent: Reads customer messages, interprets the problem, and forwards the request to the right department.

Recent Improvements

Teamwork Features: New tools like AutoGen let multiple agents work together like a team.
Better Decision-Making: Agents are getting better at thinking through tasks and checking their own work.
Self-Correction: Some agents can now notice their own mistakes and try to fix them before finishing a task.

Tools for Building AI Agents

Zapier, n8n, LangChain, AutoGen, CrewAI

Tools AI Agents Can Use

AI agents often rely on other tools to do their jobs. Here are a few examples:

Web Browsers: To look up real-time information (e.g., weather, news, prices).
Calendars: To schedule meetings or check availability.
Databases/APIs: To fetch or update data (e.g., pulling customer info from Salesforce).
Code Interpreters: To run Python scripts for data analysis or file processing.
Email/Chat Apps: To send messages or notifications.

Agentic AI

The next stage after AI Agents is Agentic AI. Agentic AI is an AI system that can make decisions without human intervention and take actions on its own to achieve a goal, without being told exactly what to do at every step. AI Agents take action when instructed to do something, whereas Agentic AI acts on its own: it is independent, thinks in advance, and acts proactively. It does not wait to be told what to do; it can even anticipate what you need.

In Agentic AI, multiple AI agents collaborate with each other to complete the requirement.
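
Here is a very simplified sketch of that collaboration (the agents are just plain Python functions standing in for LLM-backed agents; real frameworks like CrewAI or AutoGen put a model and tools behind each role):

```python
# Toy "agentic" pipeline: a planner splits the goal, specialist agents handle
# their piece, and a reviewer checks the combined result.

def planner(goal):
    return ["find flights", "book hotel", "draft itinerary"]

def flight_agent(task):
    return "Flight found: HYD -> GOA, Fri 7am"      # hypothetical result

def hotel_agent(task):
    return "Hotel booked: 2 nights near the beach"  # hypothetical result

def itinerary_agent(task, context):
    return "Itinerary: " + "; ".join(context)

def reviewer(result):
    return "flight" in result.lower() and "hotel" in result.lower()

goal = "Plan a weekend trip to Goa"
tasks = planner(goal)
context = [flight_agent(tasks[0]), hotel_agent(tasks[1])]
plan = itinerary_agent(tasks[2], context)

print(plan if reviewer(plan) else "Reviewer flagged the plan; retrying...")
```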

Advantages of Agentic AI

Teamwork: Several agents collaborate with each other and each one of them takes a portion of the task.
Greater Accuracy: Agents check the work of each other and minimize mistakes and hallucinations. 
Solves Complex Problems: Excellent at complex, multi-step problems, such as creating a detailed trip plan (booking tickets, booking hotels, and suggesting places to visit) or handling large data sets.
Flexibility: The agents are able to focus on various specializations (finance, travel, data) and merge their expertise.

Limitations of Agentic AI

Costly: More computing power and resources are required because many agents are employed.
Hard to Implement: Designing and coordinating a team of agents is more difficult than using a single AI.
Slowness: It is often overkill for simple tasks, and coordination between agents adds latency.
Requires Supervision: Human monitoring is required to make sure the team is working properly and safely.

Real-World Examples of Agentic AI

Trip Planning: A team of agents works together: one finds flights, one books hotels, one creates an itinerary, and another checks for errors or better options.
Business Reports: One agent can gather data for you, another agent will analyze the data, another agent can create visuals, and the last agent can write the summary for you.
Software Development: One agent will write the code, another tests it, and a third reviews it for bugs or improvements.
Customer Support: It can look up your order, issue a refund, schedule a pickup, and confirm everything via email - all autonomously.

Tools for Building Agentic AI

CrewAI: Good for creating teams of role-based agents (e.g., researcher, writer) that collaborate on complex tasks.
AutoGen: Microsoft's framework for building custom groups of conversational agents that talk to each other to solve problems.
LangChain: Helps developers build applications by connecting LLMs with external data sources and tools, enabling multi-step reasoning.
n8n: A workflow automation platform that lets you visually connect AI models with business apps (like CRM, email, and databases) to create automated agents.

Recent News 

Recently, ChatGPT introduced an Agent Builder, enabling users to create their own agents.

Happy Exploring! Happy Learning!     

You can’t learn everything at once—but you can start with the cloud that opens the most doors.

When I began my cloud journey, I was overwhelmed by the choices. The question everyone kept asking was: Should I learn AWS, Azure, or Google Cloud first? Here’s a little secret: most Cloud Engineers don’t choose their first cloud—it chooses them. The same happened to me. Azure chose me before I even had the chance to choose it.

That’s what inspired me to write this blog: a comprehensive comparison of AWS vs Azure vs GCP. Whether you’re a beginner like I was or just exploring the world of cloud, I hope this guide gives you the clarity and confidence to choose the right path.

Overview of the Big Three Cloud Providers

Amazon Web Services (AWS)
Launched in 2006, AWS is Amazon’s cloud platform and the pioneer in cloud computing. Known for its massive range of services and global reach, it’s widely adopted by startups, enterprises, and governments alike. Major companies using the AWS cloud include Netflix, Coca-Cola, Expedia, and Airbnb.

Microsoft Azure
Azure started in 2010 as Microsoft’s answer to the growing cloud market. It integrates smoothly with Microsoft products and is popular with enterprises, especially those already using tools like Windows Server, Active Directory, and Office 365. 
Major companies using the Azure cloud include Starbucks, HSBC, and HP.

Google Cloud Platform (GCP)
GCP launched in 2008 and is built on the same infrastructure Google uses for Search and YouTube. It stands out for its strengths in data analytics, AI/ML, and developer-focused services. 
Major companies using the GCP cloud include Toyota, Spotify, Twitter, and PayPal.

Market Leader & Market Share

AWS – The Cloud Giant

Market Share: 29-30%
Why It Leads: Offers the most services (200+), used by startups, enterprises, and governments worldwide
Popular For: Hosting, databases, AI/ML, and nearly everything else in the cloud

Azure – The Enterprise Favorite

Market Share: 21 - 22% (growing fast!)
Why It Leads: Works perfectly with Microsoft tools like Office 365, Windows Server, and Active Directory
Trusted By: 95% of Fortune 500 companies

GCP – The Tech & AI Expert

Market Share: 11% (but growing quickly)
Why It Leads: Great for data analytics, AI, and modern app development
Known For: Vertex AI, Gemini, and Google Kubernetes Engine (GKE)

Global Infrastructure Comparison

When picking a cloud provider, service availability is crucial. The number of regions and availability zones affects application speed, and it also affects compliance with local data laws, especially when global or sensitive data is involved.

As of July 2025, here’s how the big three cloud providers stand:

  • AWS has 37 geographic regions with 117 availability zones. They plan to add 4 more regions and 13 more availability zones in the immediate future. They serve 700+ edge locations, 13 regional edge caches, and offer Government Cloud support for U.S. and China regions.

  • Microsoft Azure runs 64 regions, with 15 under construction. They maintain 126 availability zones with 37 more being built. Microsoft maintains 192 edge locations in global cities, with 4 edge locations in the US government cloud, and offers Government Cloud support for U.S. and China regions.

  • GCP has 42 cloud regions with 6 new ones coming soon. They’ve built 127 zones and 202 edge locations and offer Government Cloud support for U.S. only (no China regions).

Billing in AWS vs Azure vs Google Cloud Platform

AWS: AWS introduced per-second billing back in 2017, starting with EC2 Linux instances and EBS volumes. Today, it applies to most Linux-based EC2 instance types, Fargate, EKS, and other services, with a 60-second minimum charge for EC2.

AZURE: Azure supports per-second billing for Container Instances, AKS, and a few VM types, but most VMs still follow per-minute billing.

GCP: Google Cloud Platform quickly followed AWS with per-second billing and now applies it uniformly across VM-based compute services, for both Windows and Linux.
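
To see why per-second billing matters, here is a quick back-of-the-envelope comparison (the hourly rate is a made-up example, not any provider's actual price):

```python
# A job that runs for 2 minutes and 10 seconds on a VM costing $0.20/hour.
rate_per_hour = 0.20
runtime_seconds = 130

per_second_cost = rate_per_hour / 3600 * runtime_seconds           # billed exactly
per_minute_cost = rate_per_hour / 60 * -(-runtime_seconds // 60)    # rounded up to 3 minutes

print(f"per-second billing: ${per_second_cost:.4f}")   # ~$0.0072
print(f"per-minute billing: ${per_minute_cost:.4f}")   # ~$0.0100
```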

AWS vs Azure vs GCP: On-Demand Pricing (Hourly Rates)

General Purpose (4 vCPU, 16 GB RAM)


AWS offers the lowest cost (especially with Graviton2 ARM-based chips), while GCP is mid-range and Azure is generally the highest for this category.

Compute-Optimized (4 vCPU, 8 GB RAM)


AWS offers the most affordable compute-optimized option at $0.153/hr, Azure is slightly higher at $0.169/hr, and GCP is more costly at $0.2351/hr but provides double the RAM.

This makes AWS the clear value leader, while GCP offers maximum memory for compute‑optimized workloads.
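
Using the hourly rates quoted above, here is a rough sketch of what a month of continuous usage would cost (on-demand list prices only; savings plans, reserved instances, and spot pricing would change the numbers):

```python
# Approximate monthly cost for one compute-optimized instance running 24x7,
# based on the on-demand hourly rates quoted above.
hours_per_month = 730
rates = {"AWS": 0.153, "Azure": 0.169, "GCP": 0.2351}

for provider, rate in rates.items():
    print(f"{provider}: ${rate * hours_per_month:,.2f}/month")
# AWS: ~$111.69, Azure: ~$123.37, GCP: ~$171.62
```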

Cloud Storage Pricing Comparison


All three cloud providers (AWS, Azure, GCP) compete closely with each other and set similar price ranges for storage services, with Azure often being the most cost-effective option. It's important to also look at other cost factors, like data transfer and operation charges, before making a final choice.

Now for some "actual statistics" from the history of the big three cloud providers:

"AWS's pricing needs a PhD to decode, Azure runs on 'CTRL+ALT+DEL' energy, and GCP is where projects go to quietly disappear."

Why AWS's pricing needs a PhD to decode -

AWS has 11+ pricing models.
Regional Price Variations: Example - an m6i.xlarge costs $0.192/hr in US East (N. Virginia) vs $0.263/hr in South America (São Paulo), about 37% more expensive.
Source: AWS EC2 Pricing
Real-world horror story: A newly founded start-up left 100 TB of S3 storage unmonitored and only noticed six months later, when a $23,000 bill arrived.
Lesson: Always enable S3 Storage Lens.

Why Azure runs on 'CTRL+ALT+DEL' energy -

Portal Crashes: Azure suffered 3 major outages in 2024 alone as a result of DNS/TLS issues.
Source: Azure Status History
Real-world horror story: A Fortune 500 company's Azure AD authentication failed worldwide for 14 hours during their peak selling period.
Lesson: Always check Azure Status before deployments.

Why GCP is where projects go to quietly disappear (Launch → Ignore → Sunset) -

The Google Graveyard: 274+ discontinued products (Google+, Stadia, etc.).
Source: Killed By Google
Real-world horror story: A data scientist's ML training job was auto-deleted after 72 hours of compute, at 89% progress.
Lesson: Do NOT rely on auto-delete settings for long-running jobs.

My Working Experience –

I personally find Microsoft Azure the most user-friendly. The portal interface is clean, organized, and easy to navigate — which really helps, especially when you're juggling multiple services. With AWS and GCP, I often feel things are a bit more complex. AWS has a massive service catalog that can be overwhelming, and GCP, while cleaner, still takes time to get used to. For me, Azure just makes the overall experience smoother and more intuitive.

Every cloud provider has its strengths:

    •      Azure is great for Microsoft-heavy environments
    •     GCP shines in data and machine learning
    •     AWS is the most widely adopted, with the biggest community


Happy Exploring! Happy Learning!     


When I first started using Databricks, I was completely lost in the world of Delta tools: Delta Lake, Delta Tables, Delta Live Tables (DLT), Delta Engine, Delta Sharing, the Delta Transaction Log (DTL), and Delta Merge. I kept wondering: do I really need all of these? What do they even do?

If you are in the same lake, it is easy to feel overwhelmed, but here is the good news: once you break it down, it is not as complicated as it seems. In this blog, I will walk you through each Delta tool, explain its purpose, and show how it fits into real-world scenarios. I will also share some alternatives to help you make connections.

Think of this blog as a cricket match strategy. Just as every cricket player has a crucial role to play, each tool in the Databricks Delta ecosystem has its own purpose. From the captain (Delta Lake) to the finisher (Delta Merge), I will show you how they all come together for a seamless game plan in the world of Data Engineering.

Databricks Delta Ecosystem

Delta Lake – The Captain of the Team

Delta Lake is the primary piece of the Databricks Delta ecosystem. It is an open-source storage layer designed to make a data lake as reliable as a database by adding ACID transactions (so data updates are accurate and safe), data versioning (so you can go back in history and track changes over time), and schema enforcement (to keep our data structured).

Why Delta Lake –

Traditional data lakes are excellent for storing large amounts of data, but they do not guarantee data consistency or reliability when managing updates. Delta Lake solves these problems, making it ideal for handling both real-time streaming and batch data. Examples of traditional data lake storage: Amazon S3, Azure Data Lake Storage Gen2.
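
As a quick taste of what this looks like in practice, here is a minimal PySpark sketch (assuming a Databricks notebook where spark already exists; the path is just an example):

```python
# Minimal Delta Lake round trip on Databricks (the path below is just an example).
data = [(1, "shipment created"), (2, "in transit")]
df = spark.createDataFrame(data, ["shipment_id", "status"])

# Writing in the "delta" format gives you ACID transactions, versioning, and
# schema enforcement on top of plain cloud storage.
df.write.format("delta").mode("overwrite").save("/tmp/demo/shipments")

spark.read.format("delta").load("/tmp/demo/shipments").show()
```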

Alternative tools –

If you are wondering about alternatives to Delta Lake, think of Apache Iceberg or Apache Hudi. In my view, Apache Iceberg is the closest because it supports schema evolution and versioning, but it may lack Delta Lake's seamless integration with Spark and Databricks.

Use cases in real-time?

In Finance, banks can use Delta Lake for real-time fraud detection by monitoring transaction patterns, ensuring timely alerts and accurate reporting of suspicious activities.
In Transportation, logistics companies rely on Delta Lake to track shipments and optimize delivery routes, providing up-to-date information for efficient fleet management and customer satisfaction.

Delta Tables – The Opening Batsman

What is a Delta Table?

Delta Tables are the core table format on Databricks, built on top of Delta Lake. They combine the structure and querying power of traditional databases with the scalability and flexibility of data lakes, making it easier to store, access, and analyse large amounts of data.

Advantages of Using Delta Tables –

Delta Tables make querying large datasets easy, whether you are using SQL or Python. They support MERGE (upsert) and DELETE operations, which are challenging in traditional big data systems. They can be used for both batch and streaming data processing, and they are built on Parquet, an open-source columnar storage format.
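
Here is a small sketch of that in a Databricks notebook (the table name is just an example), creating a Delta table with SQL and querying it from Python:

```python
# Create and query a managed Delta table (Delta is the default table format on Databricks).
spark.sql("""
    CREATE TABLE IF NOT EXISTS shipments (
        shipment_id INT,
        status STRING
    ) USING DELTA
""")

spark.sql("INSERT INTO shipments VALUES (1, 'created'), (2, 'delivered')")

# The same table is available to SQL and Python alike.
spark.table("shipments").filter("status = 'delivered'").show()
```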

Alternative tools –

You might think of other table formats, such as Apache Iceberg.

Use cases in real-time?

In Transportation, Delta Tables are the actual data structures where logistics companies store detailed records of shipments, vehicle locations, delivery routes, and timestamps for each transaction.

Delta Live Tables (DLT) – The All-Rounder

What is DLT?

Delta Live Tables (DLT) is a feature in Databricks that simplifies the process of managing data pipelines, much like how Azure Data Factory (ADF) helps automate ETL workflows across multiple data sources. You define the data transformations you want, and DLT automates tasks like scheduling, quality checks, and scaling based on need. It is especially helpful for real-time or frequently updated data.

Advantages of Using Delta Live Tables –

Delta Live Tables is useful because it streamlines the process of building and managing data pipelines, reducing the time and effort required to prepare data for analysis. It is ideal if you want to keep your data up to date.
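
Here is a minimal sketch of what a DLT pipeline looks like in Python (it only runs inside a Databricks DLT pipeline; the source path and the data-quality rule are assumptions for illustration):

```python
import dlt
from pyspark.sql.functions import col

# Raw shipments land as JSON files; DLT keeps this table up to date as new files arrive.
@dlt.table(comment="Raw shipment events")
def shipments_raw():
    return spark.read.format("json").load("/mnt/raw/shipments")  # example path

# A cleaned table with a simple data-quality rule: drop rows without a shipment_id.
@dlt.table(comment="Cleaned shipment events")
@dlt.expect_or_drop("valid_id", "shipment_id IS NOT NULL")
def shipments_clean():
    return dlt.read("shipments_raw").select("shipment_id", col("status"))
```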

Alternative tools –

Apache Airflow comes pretty close. It is a flexible, open-source platform for orchestrating ETL workflows, but it does not handle real-time streaming data as smoothly as DLT. At the same time, DLT does not support as wide a range of transformation languages or custom logic as Apache Airflow does.

Challenges –

  • Delta Live Tables (DLT) do not support time travel capabilities.
  • DLT is a proprietary feature; it is only available within the Databricks ecosystem and cannot easily be used outside of Databricks.

Use cases in real-time?

In Transportation, logistics companies can use DLT to automate the processing and transformation of real-time shipment data, continuously updating delivery statuses, tracking vehicle locations, and optimizing delivery routes as new data flows in. DLT ensures that data is always up to date, providing real-time insights for timely decisions for efficient delivery operations.

Delta Engine – The Fast Bowler

Delta Engine is a key part of the Databricks Delta Lake ecosystem and one of the reasons all eyes are on Databricks. It is an optimized query engine that speeds up SQL and DataFrame operations on Delta Lake. It is designed to handle large datasets and complex queries more efficiently, so you can get insights faster and more smoothly.

Advantages of Using Delta Engine –

Delta Engine boosts performance, making it simple to work with huge datasets, whether terabytes or petabytes of data. It is built to handle complex queries efficiently, so you can perform real-time analytics.

Alternative tools –

In a similar way, Azure Synapse Analytics allows you to query data stored in Azure Data Lake using its SQL engine, although it lacks the deep integration with Apache Spark that Delta Engine offers.

Challenges –

  • Limited to Databricks only
  • Can be expensive

Use cases in real-time?

You can consider this in all sectors where you are looking for real-time insights.

Delta Sharing – The Team Player 

Delta Sharing is a Databricks feature for sharing data securely across different platforms. It is built on top of Delta Lake and lets you share data across organizations, teams, or platforms in an open, standardized way.

Advantages of Using Delta Sharing –

With Delta Sharing, there is no need to physically move or replicate data in order to share real-time, up-to-date information. Rather than transferring datasets to different systems, other organizations or teams can read the data directly, which makes sharing faster and much more efficient, with no vendor lock-in.
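
On the receiving side, a consumer can read a shared table with the open-source delta-sharing Python client. A minimal sketch (the profile file and the share/schema/table names are placeholders):

```python
# pip install delta-sharing
import delta_sharing

# The provider sends you a small JSON "profile" file with the sharing server URL and token.
profile = "/path/to/config.share"                        # placeholder path
table_url = profile + "#my_share.logistics.shipments"    # placeholder share.schema.table

# Load the shared table directly into pandas; no copy of the data has to be handed over.
df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```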

Alternative tools –

Snowflake Data Sharing, Google BigQuery data sharing, and Amazon Redshift data sharing all offer similar capabilities, but in my opinion, Snowflake Data Sharing is the closest, as it is secure, real-time, and allows cross-organization sharing.

Use cases in real-time?

In Transportation, Delta Sharing can help logistics companies easily and securely share real-time shipment details, like delivery status and location, with partners such as customs or third-party warehouses.

Delta Transaction Log – The Umpire 

The Delta Transaction Log keeps track of every change made to data in Delta Lake. With it, your data stays consistent through every operation you perform. ACID compliance and time travel are possible because of the DTL.

Advantages of Using DTL –

Delta logs are essential for tracking every operation performed on a Delta table (inserts, updates, and deletes) over time.
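
Because every operation is recorded in the log, you can inspect a table's history and even query an older version. A quick sketch (the table name is just an example):

```python
# See every commit recorded in the Delta transaction log for this table.
spark.sql("DESCRIBE HISTORY shipments").select("version", "operation", "timestamp").show()

# Time travel: query the table exactly as it looked at an earlier version.
spark.sql("SELECT * FROM shipments VERSION AS OF 0").show()
```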

Alternative tools –

We can consider Apache Iceberg and Apache Hudi for this.

Use cases in real-time?

In a Bank’s Transaction System, Delta Transaction Logs can be used to track every change in customer account balances. For Example, when a deposit or withdrawal is made, the transaction log captures the change, ensuring accurate and consistent updates across all systems.
In Transport, for a logistics company, Delta Transaction Logs can be used to track changes in shipment statuses. When a package is loaded, in transit, or delivered, the log captures these events, helping to maintain accurate tracking records.

Delta Merge – The Finisher 

In Delta Lake, MERGE is a useful operation that lets you update existing data or add new data in a single step. It is like saying: if the data already exists, update it; if not, add it. We call this an upsert (update + insert). It helps keep your tables up to date. It is often called "The Finisher" because it finalizes updates in a clean and easy way.

Advantages of Using Delta Merge –

Delta Merge simplifies data updates by allowing inserts, updates, and deletes in one operation, which improves overall efficiency and ensures data consistency.
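
Here is a minimal sketch of an upsert using the Delta Lake Python API (the table and column names are examples):

```python
from delta.tables import DeltaTable

# Incoming updates: shipment 2 changed status, shipment 3 is brand new.
updates = spark.createDataFrame(
    [(2, "delivered"), (3, "created")], ["shipment_id", "status"]
)

target = DeltaTable.forName(spark, "shipments")

# If the shipment already exists, update it; if not, insert it (an upsert).
(target.alias("t")
    .merge(updates.alias("u"), "t.shipment_id = u.shipment_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```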

Alternative tools –

Apache Iceberg and Apache Hudi both work like Delta Merge, but in my opinion, Apache Hudi is closer.

Use cases in real-time?

In Transportation, Delta Merge can help logistics companies efficiently sync data across multiple systems. For example, when shipment details like delivery addresses, status, or transit time change, Delta Merge can update the system in real time without any conflicts.


Happy Exploring! Happy Learning!      
