CodeSparks - May 2025 Edition: A Critical Look at MCP, gRPC Mocking, and the Prompt Problem
Your monthly dose of programming insights, engineering practices, and tech culture.
📣 Announcements
Hey there! This month's edition is smaller but packed with thoughts that have been on my mind. MCP is still heavily hyped, and one of this edition's articles takes a critical look at the protocol's design decisions. I'm also exploring how people take shortcuts with prompts instead of learning the fundamentals, a trend that's problematic well beyond software development.
Plus some practical stuff: gRPC mocking (WireMock doesn't just support HTTP!), using tests as debugging tools, and a couple of neat tools I stumbled upon.
💻 Programming
Java Unit Testing: Tests as Debugging Tools
This piece shows something I think many of us forget - tests aren't just about verification, they're powerful debugging tools for logic errors. When you write a test that isolates a specific behavior, you're essentially creating a controlled environment to understand what your code actually does versus what you think it does. The mindset here is interesting and aligns well with test-driven development.
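To make the idea concrete, here's a minimal sketch of a JUnit 5 test used as a debugging probe rather than a regression check. The DiscountCalculator class and its off-by-one bug are hypothetical, invented purely for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountCalculatorTest {

    // A debugging-oriented test: it pins down one suspicious input in a
    // controlled environment instead of stepping through the whole flow
    // in a debugger. DiscountCalculator is a hypothetical class under test.
    @Test
    void bulkDiscountAppliesExactlyAtTheThreshold() {
        DiscountCalculator calculator = new DiscountCalculator();

        // Suppose the bug report says orders of exactly 10 items get no
        // discount. If this fails, the assertion message shows the actual
        // value, which is often enough to spot an off-by-one such as
        // `quantity > 10` where `quantity >= 10` was intended.
        assertEquals(0.10, calculator.discountFor(10), 1e-9);
    }
}
```

And once the bug is fixed, the same test stays in the suite as a regression guard - which is exactly where the debugging and TDD mindsets meet.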
gRPC Mocking with WireMock
Most people think WireMock is just for HTTP endpoints, but this article shows how to use it for gRPC services too. The key insight is that gRPC runs over HTTP/2, so WireMock can intercept and mock these calls much like regular REST APIs.
This is particularly useful when you're working with microservices that communicate via gRPC - you can mock dependencies without having to spin up actual services or write complex test doubles. The article walks through practical examples of setting up stubs, handling different response scenarios, and even simulating network failures.
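As a rough sketch of what that looks like in practice, the wiremock-grpc-extension offers a Java DSL along these lines. GreetingServiceGrpc and HelloResponse stand in for your own protoc-generated classes, and the setup assumes a WireMock server running with the gRPC extension enabled and your .proto descriptors loaded:

```java
import static org.wiremock.grpc.dsl.WireMockGrpc.message;
import static org.wiremock.grpc.dsl.WireMockGrpc.method;

import com.github.tomakehurst.wiremock.client.WireMock;
import org.junit.jupiter.api.Test;
import org.wiremock.grpc.dsl.WireMockGrpcService;

class GreetingServiceGrpcMockTest {

    @Test
    void stubsAGrpcCall() {
        // Point the DSL at the running WireMock server. The service name
        // comes from the protoc-generated stub (placeholder class here).
        WireMockGrpcService greetingService = new WireMockGrpcService(
                new WireMock(8080),
                GreetingServiceGrpc.SERVICE_NAME);

        // Stub the "greet" RPC to return a canned protobuf message.
        greetingService.stubFor(
                method("greet").willReturn(
                        message(HelloResponse.newBuilder()
                                .setGreeting("hello from the mock"))));

        // ...invoke your gRPC client against localhost:8080 and assert...
    }
}
```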
What makes this powerful is that you keep the same testing approach across your HTTP and gRPC services, instead of learning a different mocking strategy for each protocol - compare the gRPC stub above with the HTTP equivalent below.
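For comparison, here's the long-familiar HTTP side of WireMock. The /greeting endpoint and response body are invented for illustration, but notice that the stubFor(...).willReturn(...) shape is the same:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.stubFor;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import org.junit.jupiter.api.Test;

class GreetingHttpMockTest {

    @Test
    void stubsAnHttpCall() {
        // Same stub-first shape as the gRPC DSL above; only the request
        // matcher and response builder change. Assumes a WireMock server
        // is already running on its default port (8080).
        stubFor(get(urlEqualTo("/greeting"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"greeting\": \"hello from the mock\"}")));
    }
}
```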
⚙️ Engineering Practices
The Prompt Problem: Taking the Easy Route
Clayton Ramsey's piece on prompt engineering hits something that's been bothering me lately. People are treating LLMs like magic boxes - throw in a prompt, get code out, ship it. But this approach skips the fundamental work of understanding what you're actually building.
The problem isn't using AI tools - it's using them as a replacement for learning rather than as support during the process. When you don't understand the basics of what you're asking the AI to generate, you can't evaluate whether the output is good, maintainable, or even correct.
👷🏻‍♀️ Architecture
A Critical Look at MCP
Raz's critical analysis of MCP raises important questions about the design decisions behind the protocol. While MCP has generated a lot of excitement as a way to connect AI models to external tools and data sources, the author argues that several of its implementation choices are questionable.
The article points out issues with the protocol's complexity and security model, and asks whether it's solving the right problems in the right way.
🔗 Useful Resources
Stirling PDF: A comprehensive PDF toolkit that runs locally. What I like about this tool is how it handles common PDF manipulation tasks without sending your documents to external services.
Plain Vanilla Web: A reminder that you don't always need complex frameworks to build good web experiences. Sometimes the simplest approach is the most maintainable one.
What do you think about the prompt engineering trend? Are we building better developers or creating a dependency?