#2. Cloud Computing Fundamentals: Understanding Compute, Storage, and Memory
If you remove all the buzzwords, logos, and dashboards from cloud computing, what remains?
At its core, every cloud system (whether it runs a website, a database, an ERP like Dynamics 365, a VR application, or an AI model) is built on just three fundamental building blocks:
Compute. Memory. Storage.
Understanding these three is the difference between someone who uses the cloud and someone who engineers it.
Let me ask you something first:
Have you ever wondered why an application runs fast in one environment and painfully slow in another, even though both are “in the cloud”?
The answer almost always comes back to how compute, memory, and storage were chosen and configured.
1. Compute - The Brain of the Cloud
Compute is the processing power of your system.
It is what executes code, runs applications, processes API requests, performs calculations, and drives everything forward.
When you select a virtual machine in Azure or an EC2 instance in AWS, what you are really choosing is:
- How many CPUs it gets
- How fast those CPUs can work
- Whether it gets access to GPUs
More compute means more work can be done at the same time. But here’s the mistake many beginners make: more compute doesn’t always mean better performance.
A lightly used web server might run perfectly on a small CPU, while a reporting system or an AI workload may require massive compute power to crunch data.
Cloud engineers don’t ask, “What’s the biggest machine I can get?”
They ask, “What type of processing does this workload actually need?”
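One reason “the biggest machine” is the wrong question is Amdahl's law: if only part of a workload can run in parallel, extra CPUs stop helping surprisingly quickly. The sketch below computes the theoretical speedup for a workload that is only 50% parallelizable; the numbers are illustrative, not tied to any particular cloud instance.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Theoretical speedup from Amdahl's law: the serial part
    (1 - parallel_fraction) never benefits from extra cores."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

# A workload that is only 50% parallelizable caps out below 2x,
# no matter how many cores you pay for.
for cores in (1, 2, 8, 64):
    print(f"{cores:3d} cores -> {amdahl_speedup(0.5, cores):.2f}x speedup")
```

Renting a 64-core VM for this workload buys almost nothing over an 8-core one; that is the kind of reasoning behind “what does this workload actually need?”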
2. Memory - The Working Space
Memory (RAM) is where applications keep what they are actively working on.
If compute is the brain, memory is the desk the brain works on.
A small desk means the brain has to keep moving things in and out, slowing everything down. A large desk allows more things to stay in reach, making work faster and smoother.
This is why:
- Databases need large memory
- ERP systems like Dynamics 365 need stable memory
- AI and VR applications consume massive amounts of RAM
When an application truly runs out of memory, it doesn’t just slow down; it crashes.
One of the most common real-world cloud failures happens when systems have enough CPU but not enough memory. Monitoring dashboards might show low CPU usage, yet users complain the system is unstable.
That is a memory problem, not a compute problem.
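The “desk” analogy can be made concrete with a toy cache. Below, the same eight records are read repeatedly through two caches: a “desk” that holds only four of them, and one that holds all eight. This is a minimal illustration using Python's standard `functools.lru_cache`; the record names and sizes are made up.

```python
from functools import lru_cache

# A tiny "desk": only 4 records fit at once.
@lru_cache(maxsize=4)
def load_record_small_desk(key):
    return f"record-{key}"

# A desk big enough for the whole working set of 8 records.
@lru_cache(maxsize=8)
def load_record_big_desk(key):
    return f"record-{key}"

# Touch the same 8 records three times, like an app reusing its working set.
for _ in range(3):
    for k in range(8):
        load_record_small_desk(k)
        load_record_big_desk(k)

print("small desk:", load_record_small_desk.cache_info())  # every access misses
print("big desk:  ", load_record_big_desk.cache_info())    # hits after first pass
```

With the small desk, every single access is a miss: by the time the loop returns to a record, it has already been evicted to make room for others. The big desk misses only on the first pass. The same effect, at machine scale, is why a memory-starved system thrashes while its CPU graph looks idle.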
3. Storage - Where Everything Lives
Storage is where data is saved when it is not actively being processed.
This includes:
- Website files
- Database data
- Backups
- Logs
- Images, videos, and VR assets
Not all storage is the same. Some is designed for:
- Speed
- Durability
- Low cost
A cloud engineer chooses storage based on what the data is used for. A live database needs fast, reliable storage. Backups need cheap, long-term storage. Analytics systems need storage that can read massive volumes quickly.
Using the wrong storage type can quietly destroy performance, or budgets.
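The matching logic a cloud engineer applies can be sketched as a simple lookup. The tier names and mappings below are illustrative placeholders, not any real provider's SKUs:

```python
def pick_storage_tier(use_case):
    """Map a data use case to an illustrative storage tier.
    Tier names are hypothetical, not real provider offerings."""
    tiers = {
        "live-database": "premium-ssd",      # low latency, high IOPS
        "backup": "archive",                 # cheap, slow to retrieve
        "analytics": "high-throughput",      # fast sequential reads
        "static-website": "standard-object", # durable, globally served
    }
    return tiers.get(use_case, "standard-object")

print(pick_storage_tier("backup"))         # archive
print(pick_storage_tier("live-database"))  # premium-ssd
```

Real providers expose exactly this kind of trade-off as tiers (hot vs. cool vs. archive object storage, standard vs. premium disks); the engineering work is deciding which column of the table each piece of data belongs in.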
Why These Three Decide Everything
Whether you are running:
- A WordPress website
- MongoDB Atlas
- Snowflake
- Microsoft Dynamics 365
- A Meta Quest VR app
- An AI model
They are all just different ways of consuming compute, memory, and storage.
The cloud does not change physics; it only gives you more flexible ways to rent these resources.
When a system feels slow, expensive, or unreliable, the real question is always:
Were compute, memory, and storage chosen correctly for this workload?
A Question for You
As you read this, think about:
- Which workloads do you work with today?
- Are they compute-heavy, memory-heavy, or storage-heavy?
- If you had to redesign them, would you make different choices?
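One rough way to answer the second question is to look at which resource dominates your monitoring metrics. The toy heuristic below labels a workload by its highest utilization figure; the thresholds and single-winner rule are deliberately naive and illustrative only:

```python
def classify_workload(cpu_pct, mem_pct, io_wait_pct):
    """Naively label a workload by its dominant resource.
    Real capacity planning looks at trends, not single samples."""
    usage = {
        "compute-heavy": cpu_pct,
        "memory-heavy": mem_pct,
        "storage-heavy": io_wait_pct,
    }
    return max(usage, key=usage.get)

print(classify_workload(90, 30, 5))   # compute-heavy
print(classify_workload(20, 85, 10))  # memory-heavy
```

Even a crude label like this is a useful starting point: it tells you which of the three building blocks to examine first before reaching for a bigger machine.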
In the next article, we’ll take these fundamentals and see how cloud providers like AWS and Azure package them into real services, and how that affects performance, cost, and control.
Welcome to thinking like a Cloud Engineer 🚀