Teaching My MBA Manager The Hidden Power Of Data Structures And Algorithms In a VibeCoding Era

Photo by Maxim Hopman on Unsplash

A few weeks ago, my MBA manager asked me a simple but tricky question:

“Why do we still test people on Data Structures and Algorithms (DSA) in interviews? Can’t ChatGPT and modern tools just handle that?”

It’s a fair question: in this era of AI coding assistants, why bother with theoretical topics? So, instead of a long theoretical explanation, I told a simple, business-driven story.

The Scenario

We are a SaaS company whose software is used by other companies, and one of its features is the ability to search transactions for their customers.

So imagine two companies using our software as shown in Table 1.

| Company | Description | Customers |
| --- | --- | --- |
| Jack Logistics | A regional logistics company | 100 |
| Kim Investment Bank | A large multinational bank | 100,000 |

Table 1. Our current users

Both operate across Europe. Analysts from both companies have identified key customers and shared their names with various departments — marketing, sales, and management.

Managers in each company use dashboards to search for customer names in order to review past transactions and suggest customised services.

This is where the search algorithms quietly step in.

Behind the Scenes — Scenario 1: The Linear Search

When a manager searches for a customer, the algorithm retrieves all customers belonging to that company, creates an alphabetically sorted list, and then goes through the listed names one by one from start to finish until it finds a match.

In the worst case, if the name is at the end of the list (e.g., Zhang), the software must check every single name before returning a result.
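In code, the linear scan described above might look like the following minimal Python sketch (the function name and sample customer list are illustrative, not our actual codebase):

```python
def linear_search(names, target):
    """Check each name one by one, from start to finish."""
    for index, name in enumerate(names):
        if name == target:
            return index  # found the target at this position
    return -1  # checked every name without a match

customers = sorted(["Zhang", "Amir", "Lena", "Omar"])
print(linear_search(customers, "Zhang"))  # → 3 (the very last name checked)
```

The number of comparisons grows with the length of `customers`, which is exactly why the worst-case times in Table 2 scale with the customer count.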

| Company | Customers | Search Type | Best Case: Time to Find “Amir” | Worst Case: Time to Find “Zhang” |
| --- | --- | --- | --- | --- |
| Jack Logistics | 100 | Linear | 1 ms | 10 ms |
| Kim Investment Bank | 100,000 | Linear | 1 ms | 10,000 ms |

Table 2. Time to find a customer at the beginning or end of a list

From Table 2, the time it takes to find a user at the end of the list is directly proportional to the number of customers in that list: the more customers, the longer the search. That is why Kim Investment Bank takes so long (10,000 ms) to find a customer at the end of its list compared to Jack Logistics.

Financial Impact of Scenario 1

Suppose we run our SaaS in the Amazon Web Services (AWS) cloud and the search feature uses AWS Lambda. Lambda charges about $0.00001667 per GB-second. If our function is allocated 128 MB of memory (a common setup for lightweight APIs), that works out to approximately $0.00000000208 per millisecond of execution. That’s a tiny amount, so let’s work it out, as shown in Table 3.

| Company | Time per Worst-Case Search | Cost per Query | Managers | Searches per Manager per Day | Daily Cost | Monthly Cost (~30 days) |
| --- | --- | --- | --- | --- | --- | --- |
| Jack Logistics | 10 ms | $0.0000000208 | 5 | 50 | $0.0000052 | $0.00016 |
| Kim Investment Bank | 10,000 ms | $0.0000208 | 20 | 100 | $0.0416 | $1.25 |

Table 3. Cost of running the search feature with the linear search algorithm.
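The figures in Table 3 can be reproduced with a few lines of arithmetic. This sketch assumes the $0.00001667-per-GB-second rate and 128 MB allocation quoted above:

```python
GB_SECOND_RATE = 0.00001667           # USD per GB-second (assumed Lambda rate)
MEMORY_GB = 128 / 1024                # 128 MB expressed in GB
COST_PER_MS = GB_SECOND_RATE * MEMORY_GB / 1000  # ≈ $0.00000000208 per millisecond

def monthly_cost(search_ms, managers, searches_per_day, days=30):
    """Monthly cost if every search hits the worst case."""
    queries_per_day = managers * searches_per_day
    return COST_PER_MS * search_ms * queries_per_day * days

print(round(monthly_cost(10, 5, 50), 5))        # Jack Logistics: ≈ 0.00016
print(round(monthly_cost(10_000, 20, 100), 2))  # Kim Investment Bank: ≈ 1.25
```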

From Table 3, with our two clients, the search feature costs us roughly $1.25 a month to operate. These numbers seem tiny until you multiply them by thousands of Lambda calls across hundreds of endpoints per minute. Scaling inefficiency adds up fast.

When the Business Grows

As the economy improves, both companies expand into America and Asia, and each experiences a 10× growth in its customer base, as shown in Table 4.

| Company | Old Customers | New Customers |
| --- | --- | --- |
| Jack Logistics | 100 | 1,000 |
| Kim Investment Bank | 100,000 | 1,000,000 |

Table 4. Our SaaS clients experience a 10× growth in their respective customer bases.

If our software still uses the same linear search, time and cost grow linearly with the data, as shown in Table 5.

| Company | Time per Worst-Case Search | Approximate AWS Lambda Cost per Query | Monthly Cost (~30 days) |
| --- | --- | --- | --- |
| Jack Logistics | 100 ms | $0.000000208 | $0.00156 |
| Kim Investment Bank | 100,000 ms | $0.000208 | $12.48 |

Table 5. The increase in cost is proportional to the number of customers.

From Table 5, with our clients’ customer bases growing, it now costs us roughly $12.50 a month to offer the search feature to our two clients. Most importantly, however, a worst-case search for Kim Investment Bank now takes 100 seconds; this is completely impractical for real-time dashboards, and a very poor user experience for them.

This, of course, prompts “your system is very slow” complaints from our users at Kim Investment Bank, and since we are a customer-focused company, the manager naturally asks:

“Can the engineers make it faster at a lower cost of operation?” Yes, with better algorithms.

Scenario 2: Using the Binary Search Algorithm

The engineering team redesigns the search algorithm. Instead of checking every name one by one, the algorithm now:

  1. Sorts the list of names.
  2. Splits the sorted list in half.
  3. Determines which half contains the target name.
  4. Discards the irrelevant half.
  5. Repeats steps 2 to 4 until it finds the target name.

Each step eliminates half of the remaining data, drastically reducing the number of comparisons. This is called Binary Search. Even after the 10× growth in our clients’ customer bases, there is a dramatic reduction in search time: Kim Investment Bank drops from 100,000 ms to 20 ms, and Jack Logistics from 100 ms to 10 ms, as shown in Table 6.
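The five steps above are the classic binary search. Here is a minimal Python sketch (the generated customer names are placeholders for illustration):

```python
def binary_search(sorted_names, target):
    """Repeatedly halve the sorted list, discarding the half that cannot hold the target."""
    low, high = 0, len(sorted_names) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_names[mid] == target:
            return mid        # found the target name
        elif sorted_names[mid] < target:
            low = mid + 1     # target is in the right half
        else:
            high = mid - 1    # target is in the left half
    return -1  # target is not in the list

# One million zero-padded names sort lexicographically, like an alphabetical customer list.
customers = [f"customer{i:07d}" for i in range(1_000_000)]
print(binary_search(customers, "customer0999999"))  # → 999999, after ~20 halvings
```

Python’s standard library offers the same idea via the `bisect` module, so in practice you would rarely hand-roll this loop.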

| Company | Customers | Steps (log₂ n) | Approximate Time | Lambda Cost per Query |
| --- | --- | --- | --- | --- |
| Jack Logistics | 1,000 | ~10 | 10 ms | $0.0000000208 |
| Kim Investment Bank | 1,000,000 | ~20 | 20 ms | $0.0000000416 |

Table 6. Worst-case search time for binary search.

Performance and Cost Comparison

| Metric | Linear Search | Binary Search | Improvement |
| --- | --- | --- | --- |
| Jack Logistics — Time per Query | 100 ms | 10 ms | ≈ 10× faster |
| Kim Investment Bank — Time per Query | 100,000 ms | 20 ms | ≈ 5,000× faster |
| Jack Logistics — Cost per Query | $0.000000208 | $0.0000000208 | ≈ 10× cheaper |
| Kim Investment Bank — Cost per Query | $0.000208 | $0.0000000416 | ≈ 5,000× cheaper |

Table 7. Cost and performance comparison between linear and binary search.

From the observations in Table 7, a change of algorithm has simultaneously improved the operating cost of our search feature and the user experience. Instead of up to 1,000,000 checks for Kim Investment Bank, we do about 20, a roughly 5,000× gain in response time, and the cost reduction is equally dramatic: from roughly $12.50 a month to fractions of a cent.
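To make the comparison concrete, we can count worst-case checks for each algorithm at the customer-base sizes in this story (a sketch; real wall-clock times vary, but the comparison counts do not):

```python
def linear_checks(n):
    """Worst case for linear search: every one of the n names is checked."""
    return n

def binary_checks(n):
    """Worst case for binary search: halve the list until one candidate remains."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps + 1  # plus the final comparison

for n in (100, 1_000, 100_000, 1_000_000):
    print(f"{n:>9,} customers: linear {linear_checks(n):>9,} checks, binary {binary_checks(n)} checks")
```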

Conclusion

In this article, I explained Data Structures and Algorithms to my MBA manager using two simple but powerful examples: linear search and binary search. By examining both the execution speed and the financial impact (AWS cloud cost) of each approach, we saw how algorithm choice directly affects system performance and operational cost.

In Scenario 1, linear search showed that both performance and cost increase proportionally with data growth. As the dataset becomes larger, the time required to process a request and the cloud expenditure increase at the same rate. This makes the search feature increasingly inefficient and expensive.

On the other hand, in Scenario 2, binary search demonstrated how a smarter algorithm can drastically change the results. Because it halves the search space at every step, its logarithmic time complexity delivered improvements of several orders of magnitude: searching 1,000,000 customer records dropped from 100,000 milliseconds to 20 milliseconds, and the monthly AWS cost dropped from roughly $12.50 to fractions of a cent for the same workload.

Even in the age of ChatGPT, AI copilots, and quick “vibe coding,” understanding how algorithms behave as data increases is still essential. It is not only about passing interviews. It is about building systems that scale well, reducing daily operational costs such as cloud bills, and giving users a faster and smoother experience.

For all these reasons, I told my manager that we should keep DSA questions in our interviews: in modern software development, efficiency is not just a technical matter. It is also a financial one.
