My focus is on making software, coaching Agile/Scrum teams, and being a husband and father--definitely not on blogging--so please do not expect very frequent posts here.

Tuesday, October 21, 2025

Don't complain about your stakeholders; instead, collaborate, then build trust

Sales might say "Don't bother us in sales, we're too busy selling!", which makes it hard to build software that meets their needs. In manufacturing, the attitude was the opposite: "Please work with us to make the line run better!", which makes it hard to prioritize and triage all the incoming requests. In both cases, getting into a truly collaborative relationship with stakeholders and key users, then building trust, helps more than anything to cut through the chaos inherent in either extreme of "feast or famine".

Monday, November 25, 2024

The way to do a no-op (do nothing) action in GitHub Actions is `run: exit 0`

You'd think `echo` or `:` would work, but `echo` leaves unexpected cruft in the log, and `:` isn't recognized as a command, presumably due to how the YAML is interpreted. Hope this helps!
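For reference, a minimal sketch of a workflow using such a step (the workflow and step names here are just placeholders):

name: noop-example
on: push
jobs:
  noop:
    runs-on: ubuntu-latest
    steps:
      - name: Do nothing
        run: exit 0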

Saturday, August 17, 2024

Readability Above All Else in Code Reviews

In my last post, I made a case that code reviews cannot be relied upon as the main activity for achieving defect prevention, correctness, robustness, usability, or learning. Even so, these goals are worth pursuing in code reviews, though other practices such as unit testing and QA should be the primary drivers toward them. Aside from those goals, code reviews can indeed be the primary driver of other areas of improvement for a software team. It is arguable (comments welcome), but I say that above all else, code reviews routinely and consistently improve code readability.

Highly readable code might not be the first measure of software quality or of team competency that you'd list, but it should be. Code is typically written once and revised a few times, but read dozens of times. As Abelson and Sussman put it, programs are for humans to read and only incidentally for computers to execute. I or any competent developer can deal with any code that we can understand, and the quicker we can understand it, the faster it will be to troubleshoot, maintain, and enhance. Readable code gives us more than a fighting chance. Too often, when teams are under the gun and avoid spending any time or effort on code review, we get indecipherable, weird, arcane, and confusing code, and this causes troubleshooting, maintenance, and enhancement costs to go way up in the future, and in unpredictable ways.

The most important imperative in generating highly readable code, I say, is this: write your code so that the reader can immediately and intuitively discern what it actually does. Don't make me think too hard. Make the code obvious. As a reviewer, you can make this your top priority when recommending changes to the code under review. A reviewer ought to first point out wherever they don't understand what the code is doing and try to collaborate with the author to make it obvious. As an author, when a reviewer doesn't immediately grok what you wrote, your first thought shouldn't be "Okay, let me explain it" and certainly not "I should probably add some comments that explain this". Instead, think "If this reviewer didn't get it, others won't get it either. What code should change to make this more obvious?"

More often, the reviewer already understands what the code is doing but quickly thinks of a better way to express it--"better" here being synonymous with "more readable". Code review is the only practice I know of that presses code into a more readable state routinely and reliably. These are often the quick wins: the reviewer says "try this instead", and very often the author can quickly agree and integrate the change. As an author, I am very apt to take my coworker's suggestions, certainly when I am ambivalent and even when I mildly disagree. The rapport and mutuality gained from adopting a reviewer's recommendation might be more valuable than the change itself. And more than once I have looked back on such a change months later and been convinced in the end--turns out my teammates are really quite good at this programming stuff!

We should all write our code so that the mapping from the code itself to our intent is as simple and as straightforward as possible. For example, if I intend for a method to calculate and then save, naming it CalculateAndSave is better than naming it Save. In a code review, pointing out "this is doing more than saving" will at least result in a renaming of the method. That name might then lead us to question whether the calculation should be done in a separate method for better separation of concerns. Conversely, if I named a method ValidateAndSave but there is no validation code in the method at all, my teammate would immediately point out that I intended to validate but never actually did so. Pointing out the differences between apparent intent (or mutually-understood requirements) and actual code is a top benefit of doing code reviews.
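As a hypothetical C# sketch of how that plays out (the Order type, its line items, and the repository are invented for illustration; assumes using System.Linq):

// Before: the name promises less than the method does
public void Save(Order order)
{
    order.Total = order.Lines.Sum(line => line.Price * line.Quantity);
    repository.Save(order);
}

// After: the name states both effects, and invites separating the concerns
public void CalculateAndSave(Order order)
{
    CalculateTotal(order);
    repository.Save(order);
}

private static void CalculateTotal(Order order) =>
    order.Total = order.Lines.Sum(line => line.Price * line.Quantity);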

So what makes code highly readable? Sometimes "it depends" (of course). Sometimes shorter code is more readable (e.g. omitting redundancies), and sometimes longer code is more readable (such as declaring a semantically meaningful temporary variable). Sometimes clever code is more readable (when your cleverness came up with the apt statement--not when cleverness is needed to understand it); sometimes "dumb" code is more readable (especially when familiar and unastonishing). Sometimes more abstraction (more layers) is more readable; other times, collapsing layers makes code more readable. When these questions come up, the important thing is that the team decides together which way to go in each case. If the team is skilled and genuinely cares, I trust them to make a decision, even if I disagree with it. The difference between my idea of good code and my team's idea is usually tiny compared to the difference with code that isn't reviewed for readability at all.

However, there are some good readable-code rules-of-thumb that are almost always true. Simple beats complex. Boring beats exciting. Code written as others expect beats astonishing code. Conventional code beats innovative code. Pattern-following code beats trailblazing code. Linear code beats nested code, and shallower nesting beats deeper nesting. More short methods are better than fewer long methods. I think we all appreciate simple, boring, as-expected, conventional, patterned, linear, shorter code--so as a code reviewer, be eager to give your recommendations to make it such.
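As one illustration of the "linear beats nested" rule, here is a hypothetical C# before-and-after (the Customer type and the discount rule are invented for the example):

// Nested: the reader must hold three levels of context in mind
public decimal Discount(Customer? customer)
{
    if (customer != null)
    {
        if (customer.IsActive)
        {
            if (customer.OrderCount > 10)
            {
                return 0.10m;
            }
        }
    }
    return 0m;
}

// Linear: guard clauses keep every path at a single level
public decimal Discount(Customer? customer)
{
    if (customer == null) return 0m;
    if (!customer.IsActive) return 0m;
    if (customer.OrderCount <= 10) return 0m;
    return 0.10m;
}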

And there is one more rule-of-thumb that I hold to be the most important: refactored code beats original code every time. Code can almost always be made more readable by being revised, especially when there is no pressure to change its behavior. This is exactly where team peer code review shines: it allows code to be refactored mere minutes or hours after it was originally written, with no added testing cost, while you are most familiar with both the desired and actual behavior of the code in question--the moment when such refactoring is least costly and most welcome by all concerned. It is the biggest no-brainer when it comes to code review: seize the opportunity to refactor your code to be more readable, and leverage code reviews to spur changes for readability.

Of course, even if all that is agreed among team members, opinions will still differ as to what is easiest to read. One developer thinks nested ternary conditionals are easier to read than an if/else chain; their teammates think they're nuts. Some like indenting via tabs, some via spaces. When these differences are aired in a team peer code review, a quick resolution is likely. If the team upholds the value of others' work and wants to reach consensus, a disagreement in a code review will result either in the team proposing a common practice that all can and should agree on, or in the team agreeing that either way is fine and tolerating the (unimportant) inconsistencies. The important concern is that the team be empowered to consider and decide for themselves, and that any team member can speak up to raise the issue (both within the code review and by initiating a broader team discussion).
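Here is the sort of snippet that sparks such a debate (two equivalent alternatives, with invented names):

// One teammate finds the nested ternary compact and clear:
decimal rate = isMember ? (isSenior ? 0.20m : 0.10m) : 0m;

// Another finds the equivalent if/else chain far easier to scan:
decimal rate;
if (!isMember) rate = 0m;
else if (isSenior) rate = 0.20m;
else rate = 0.10m;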

Making the code more obvious and making the code reflect the intent of the author will result in big long-term wins for the product and for the team. Something curious happens once a team gets a little experience at this: they inevitably start forming team norms that lead to more efficient and productive coding. Often this also leads to agreed coding standards that take further time-consuming discussions (or arguments, or fights) out of play. These benefits snowball into greater productivity and team happiness as well, so I'll explore that in my next post. I've only scratched the surface on some other benefits of code review, which will also be coming up.

Do you have a piece of code you'd like to make more readable?  Post it here (maybe a link to a GitHub PR or commit?) and I'll give it a crack.

Thursday, August 15, 2024

Code Reviews Are Overrated (but not how you think!)

When I first began leading a "two-pizza" software team in 2007, our code reviews focused on code robustness and correctness. Our architect would choose code to print out--usually around 4-6 pages' worth each week. We would gather in a conference room and spend an hour scrutinizing, and ultimately arguing about, what in that code should change. A few months into the job, he turned that duty over to me, and I knew both the printed approach and the goals we were pursuing had to shift! Since that time, tools have greatly improved, code review practices have become easier and more widely known, and the way we structure code is less novel and more relatable. On top of these improvements, developers now expect to review code more routinely and with a much healthier attitude; the benefits are more intuitive to all. However, after doing code reviews on most days for the last 17 years, I have come to see that some of the supposed benefits of code review are overrated. Let's acknowledge the limitations of code review toward those goals, and then consider other goals for which code reviews might be the primary means of improvement.

Most importantly, code reviews are not great at finding defects (bugs) in the code. I hesitate to even mention this first: indeed, my teams have used code reviews to find and fix thousands of defects in our code over the years--perhaps making code reviews worthwhile for this benefit alone. Furthermore, bugs found in code review can be fixed much more easily and cheaply than those found in any round of testing, or in the worst case, found by actual users in production. Nevertheless, if we relied on code review as the main way to prevent defects, our quality would dramatically drop. Most defects are simply not obvious enough in code reviews. We need testing in all its forms. We need mutual understanding of requirements, use cases, and/or user stories. We need collaboration and an ongoing conversation about what our software should do and what our users' experience should be. We need an overall team culture growing in software quality and technical excellence. There is not one way to prevent defects; we need all the ways. Certainly code reviews are one tool for higher quality software, but the answer to "why did this bug make it into production?" never comes back as "Eric reviewed the code poorly."

Let me restate: overrated does not mean nonexistent. Code reviews do often find bugs; they just aren't great at doing so. Much as a .300 hitter in baseball (that is, one who succeeds only 30% of the time) is an all-star, the value of preventing defects via code review is large even though it fails to detect them more often than not. Keep looking for bugs in your code review effort--just don't expect to find most of them that way.

A concept related to defect prevention is correctness: does the code do what the author intended it to do? Code reviews are overrated in revealing incorrect code. After all, unless the review somehow expresses what the author intended separately from the code that implements it, the reviewer cannot tell where the intent differs from the code. Reviewers often proceed assuming the best intentions of the author, and also assume that the code accurately reflects those intentions. A reviewer who encounters unexpected code might think "well, the author thought this through and I haven't; she must be right." The wise reviewer will ask questions instead of assuming the best, using the review as a jumping-off point for discussion and collaboration, but no one can question enough, or in the right places, all the time. Fortunately, there is a proven practice that is explicitly and mainly about checking the written code against the author's intention: unit tests are exactly what we need, especially when they are written side-by-side with the implementation code, either immediately after ("code-then-test") or before (e.g. via TDD). If you aren't using unit testing and instead rely on code reviews to ensure correctness, a lot of misunderstandings and mistakes will slip through to QA or beyond.
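As a minimal sketch (assuming xUnit and an invented PriceCalculator class), note how the test expresses the intent separately from the code that implements it:

using Xunit;

public class PriceCalculatorTests
{
    [Fact]
    public void Total_applies_ten_percent_discount_above_100()
    {
        var calculator = new PriceCalculator(); // hypothetical class under test

        var total = calculator.Total(subtotal: 200m);

        // The intent, written where a reviewer can check the code against it
        Assert.Equal(180m, total);
    }
}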

The flip side of defect prevention is the completeness and robustness of the code: does the code do all that it should? Again, code review only examines what the code does; it isn't so good at flushing out what the code isn't doing but should be. A really good reviewer will have a strong understanding of the intent and end goals of the code under review, and thus can and should question the author on what might be missing. However, sometimes the author is already the person who knows best what the code should be doing, and other reviewers might assume too much given the code they see. Again, use the review as a jumping-off point for discussion and collaboration. But even this healthy attitude will only get you so far. Code reviews are no substitute for a team-wide mutual understanding of both the user's needs and the big-picture direction of the software effort. It takes many other practices to build the product right; these include product planning time, technical design meetings, pairing, prototyping, and frequent review and discussion with the product owner and users.

Code reviews aren't good at revealing UI/UX shortcomings because code doesn't show the interface or how users interact with it. Testing, especially user-acceptance testing, of any user interface cannot be replaced by code review. A team must engage in UI/UX reviews, both among the team and with their users or user representatives. However, I do highly recommend expanding the idea of "code" reviews to more than code: include mockup/prototype reviews or even reviews of nearly-finished screenshots. Reviewing a screenshot, a UI mockup image, or a photo of a whiteboard is far more valuable than merely reviewing the code that might generate such an interface.

Technical non-coders can't rely on code review to stay in the know. Architects and tech leads who require that all code changes be reviewed by them do their teams a disservice. They hinder all three of Pink's factors of job happiness: they block their team members' autonomy, prevent them from attaining mastery, and de-emphasize the purpose of building the software product. At best, they make themselves a necessary impediment to getting the job done; at worst, they become the lowest level of rubber-stamping bureaucrats. Instead, architects and tech leads should be sure they are occasionally authoring code just as they did as senior developers, and they should sacrifice code review time if necessary to achieve that. By all means, architects and tech leads should participate in code reviews (and their recommendations should be valued by all), but the team should be primarily responsible for reviewing each other as peers. The team should be empowered to approve and complete code reviews quickly and to do whatever is necessary to prevent code reviews from becoming a delay of any kind.

Finally, code review is a poor substitute for cross-training, mutual team learning, and team-wide ownership of the requirements, code, and software deliverables. Code reviews are just too narrow to be our main teaching tool. Because teams that do code reviews well tend toward shorter reviews in higher quantity, any given code review will not expose enough of the software to generate the kind of conversations needed to genuinely spread knowledge around. We can hope that code reviews will spur broader conversations about our software, but we cannot depend on them as the main tool for doing so. Furthermore, the deep knowledge needed to grow as a team can only come from implementing or improving the software; reviewing code does not make the reviewer as knowledgeable as its author. Planning, team organization, daily scrums, pairing, and swarming can all help with this. These practices and more will happen automatically the more your team takes on a mindset of collective code ownership.

So, are you discouraged about the benefits of code review? Don't be. The supposed benefits above can still be had, just not to the extent you might have hoped for from code reviews alone. We can gain some defect prevention, correctness, robustness, usability, and learning from code reviews; we simply cannot rely mainly on code reviews for these goals. Supplement code reviews with other practices tailored to each of them.

But I haven't yet written of the underrated, first-class benefits of code review. It turns out there are many gains for which we can rely on code review--especially on team peer code review. Compared with my team in 2007, today we review far more code, more efficiently, and with different goals. I'll explain a few in my next posts! Till then, let me know how I can help with your team's code reviews, or perhaps review your code myself as freelance side work.

Thursday, April 18, 2024

Full-stack? How about "Full Scope"

Hi, I’m Patrick Szalapski.

I make software and help others to make software.

I lead teams to make optimally-scoped business web apps and tools, often with .NET, growing those teams to deliver faster, better, and less expensively. I have a proven track record at multiple companies of saving businesses millions of dollars and thousands of hours. I can lead or support the effort to figure out what you need, partnering with stakeholders to attack problems and quickly move to building solutions. My teams quickly grow to be highly productive, technically excellent, self-organizing, and happy.

Best of all, I pivot among these roles to whatever a team most needs for sustainable success. 

Wednesday, June 22, 2022

Catch me presenting on Blazor at Minnesota Developers Conference 2022 on June 22 in Minneapolis

I'll be presenting at the Minnesota Developers Conference, June 22, 2022 in Minneapolis on Practical Blazor.  I hope you can come and introduce yourself!

Also presenting will be some of my favorites, including Kamran Ayub with a fantastic approach to personal finance, Erik Onarheim on building a game engine in JavaScript, and Robert Boedigheimer on Regular Expressions in .NET. So good to finally get back to an in-person conference!

EDIT: That's today!  Here's my slide deck for reference.

Friday, March 25, 2022

Avoiding unnecessary rendering in Blazor

In my previous post, I laid out the fundamentals of what can cause a Blazor site to be slow due to re-rendering. By far the most common cause of such slowness is when hundreds or thousands of components all re-render multiple times based on changes to parameters or explicit calls to StateHasChanged(). Often only a few of the component instances in question truly need to re-render, but Blazor has no way to avoid re-rendering the rest on its own, so it is up to the developer to worry about this concern whenever dealing with a proliferation of component instances.

The parent component

This potential slowness is often manifest in lists, grids, or tables where the content of each item is non-trivial. Imagine a UI with a grid of product images: a ProductGrid.razor something like…

<button @onclick="Add">Add new to top</button>
<ul>
  <Virtualize Context="productViewModel" Items="Products">
    <ProductTile Product="productViewModel" @key="productViewModel.Code" />   
  </Virtualize>
</ul>
@code {
    [Parameter, EditorRequired]
    public List<ProductViewModel> Products { get; set; } = null!;
    private void Add() => Products.Insert(0, new());
}

Note here we have used the Virtualize component to avoid rendering components for items in the list that aren't currently shown (due to scrolling). We are sure also to use a @key attribute on the root child element or component inside Virtualize.

So this parent component seems pretty simple and already optimized using Virtualize; how could it be a problem?

Narrow edits can cause wide re-renders

Note there is a button to add a new empty product. When this button is clicked, Blazor re-renders the component that handles the event, which means every child component will re-render as well. In our case, this probably means our ProductGrid and the 50+ shown ProductTiles; suppose each of those tiles shows a dozen small components, each of which has to re-render too. That's easily 770+ re-renders just to show the new item.

Ideally, clicking the add button would re-render only the ProductGrid, exactly one ProductTile (the new one), and the small components in that one new instance: thus 14 re-renders.

The reason each of the ProductTiles has to re-render is that its parent is re-rendering, and it takes a ProductViewModel (one of our domain classes) as a parameter. Blazor cannot determine that nothing has changed in the fifty shown products, as it cannot interrogate all the contents of each ProductViewModel and compare them to the previous values. Remember, Blazor automatically skips re-rendering based on unchanged parameter values only when all parameters are of certain well-known immutable or primitive types.
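For contrast, a hypothetical component that Blazor can skip automatically, because every parameter is a type it knows how to compare:

<span>@Label: @Count</span>
@code {
    // Both parameters are well-known immutable/primitive types (string, int),
    // so when the parent re-renders with unchanged values, Blazor skips
    // re-rendering this component on its own.
    [Parameter] public string? Label { get; set; }
    [Parameter] public int Count { get; set; }
}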

The main way to address this in the ProductTile component is to override OnParametersSet to store the previous value of every part of the parameter that affects the displayed state, and to override ShouldRender on the ProductTile component so that it returns false when the previous values are all the same as the current values.

But such logic comparing “previous to current” would have to be repeated in every component, yet customized for each one to ensure you are tracking all the properties you care about. I’ll avoid showing you the example code for this since it is not the way I recommend doing it.

Avoiding the re-renders

So here's where my NuGet package Sz.BlazorRerenderReducers comes into play. By inheriting from its DisplayHashRerenderComponentBase and overriding the protected GetDisplayHash() method, I can simply return a string that is unique for each possible display state; usually, this is just a string that contains the value of every property that affects the display state in any way.

This has the benefit of reducing the code required to finely control re-rendering to a couple of lines in most cases: one line to inherit from DisplayHashRerenderComponentBase and a short override of GetDisplayHash(). So, such a ProductTile.razor will look something like:

@using Sz.BlazorRerenderReducers
@inherits DisplayHashRerenderComponentBase
<li class="product-tile">
    <ProductImage Product="@Product" />
    <div>
        @if (!Product.IsInStock)
        {
            <OutOfStockBadge />
        }
        <ProductCode Product="@Product" />
        <ProductPriceTag Product="@Product" />
    </div>
</li>
@code {
    [Parameter, EditorRequired]
    public ProductViewModel Product { get; set; } = null!;

    // when this value is unchanged, suppresses unnecessary re-rendering
    protected override string? GetDisplayHash() =>
        $"{Product.Code};{Product.Price};{Product.IsInStock}";

    // write a console line on every render
    protected override void OnAfterRender(bool _) =>
        Console.WriteLine($"Rendered {GetType()}");
}

Here, GetDisplayHash() can return anything that is uniquely determined by the data that affects what the component could render. True to its name, it is a hash for the display of the component. In this example, I've just shown a simple concatenation of the values that influence the display, but you could use any function you'd like--perhaps hash functions implemented on the parameters themselves, or serialization via System.Text.Json--though I have found that an interpolated string is often simplest and easiest to understand. The base component DisplayHashRerenderComponentBase will then store the previous and current values of this hash, and ShouldRender will return true or false appropriately, removing the need to reimplement such custom ShouldRender code in each component.
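For instance, the serialization option might look like this (a sketch assuming the same Product parameter as the example above):

// Hypothetical alternative: let System.Text.Json build the hash string
// from an anonymous object holding the display-relevant values.
protected override string? GetDisplayHash() =>
    System.Text.Json.JsonSerializer.Serialize(
        new { Product.Code, Product.Price, Product.IsInStock });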

Drawbacks

Of course, one drawback to this approach is that it requires all such components to inherit from DisplayHashRerenderComponentBase, removing the possibility of inheriting from any other base component, abstract or otherwise (unless, of course, you can in turn make that base component inherit from DisplayHashRerenderComponentBase). Fortunately, I have not seen much need to inherit a component from anything other than ComponentBase (which DisplayHashRerenderComponentBase itself inherits from), so this has not yet been an issue for me.

If you've followed along to this point, perhaps you can imagine the bigger drawback to this approach: it requires the developer to specify every potential source of changes to the display state. In effect, you have to "register" (by including it in the GetDisplayHash() logic) every property that might affect the GUI in each component, including its child components (after all, if a parent component avoids re-rendering, there's nothing to cause its children to re-render either). As you can imagine, this can lead to confusion as to why a component isn't re-rendering when it needs to--sometimes a parameter property gets used but is forgotten in the hash. No error results, and there is no possible cue to tell the developer that a new binding, or some other new code indirectly affecting a bound reference, requires tweaking the hash.

However, even this latter drawback seems lower-risk than the frequent errors and confusion that result from manually giving every component a ShouldRender override that would itself need to take into account every relevant property, and would also need to track the current and old values of each such property to determine whether a render is necessary. So this technique seems a distinct improvement over having such nuanced and repetitive (though custom) ShouldRender overrides in many components.

Demo

So try it now on this demo page. To the code from above, I've added a toggle checkbox at the top to enable and disable the technique. Open the console and add an item, and you'll see a console line written on each ProductTile render. With GetDisplayHash implemented, we've reduced the number of ProductTile renders from dozens to one--but don't forget the unchanged child components: we've reduced the total number of components that render from several hundred to five. In my experience, this often cuts the "lag" after the user clicks the button from a few seconds to imperceptibly fast. On apps where users do lots of navigation or small manipulations, a lag of a few seconds can kill your perceived performance, so this is a big win.

So if you have (or suspect you have) hundreds or thousands of unnecessary renders happening on many changes, give Sz.BlazorRerenderReducers a try to drastically reduce re-rendering to only what is needed. You can read more on the BlazorRerenderReducers project GitHub page and leave an issue with any concerns or questions, or just reply to this post.