Sharing your work as a software engineer inspires others, invites feedback, and fosters personal growth, Suhail Patel said at QCon London. Normalizing and owning incidents builds trust and helps teams understand the complexity of their systems. AI enables automation, but needs proper guidance, context, and security guardrails.
Suhail Patel presented the talk Shine Bright as an IC: Growing Yourself as Your Company Grows, in which he spoke about growing as a software engineer.
It’s important to present your work to inspire others and gather perspectives, Patel said. Sometimes you open yourself up to challenge, but that can also be a growth opportunity: you learn about assumptions you made that don’t quite hold true.
Showcasing your work can take the form of writing blog posts, as well as giving technical talks and presentations, as Patel explained:
Often I hear people saying their work is not novel or groundbreaking. I argue most of our work in the industry is derivative. Being able to form ideas together, whether they are original or inspired by others with credit, is key.
Normalizing incidents matters, Patel mentioned. Nothing goes perfectly in the projects and bets that engineers take, especially as those projects grow in scope and complexity, he said. The reaction and response to incidents are critical:
I’d argue it’s make or break in terms of trust building.
Being vocal about what went wrong, how you intend to fix it, and in what time frame can be massively helpful, even for the smallest of issues, Patel said. It’s all about owning the problem. A good problem resolution can turn into a positive advertisement for the broader context of your work. As leaders, we also need to instill this mentality in fellow team members, he said.
Patel also spoke about the challenges with AI in software development. Every day, we see statistics and claims that some huge percentage of code in some organisations is being written with AI. AI-based tools allow people outside of the software engineering discipline to put their own little automations together in a powerful manner.
Similar to how we coach and mentor new engineers into the field, we need to invest in our prompts to teach AI tools about organisational coding patterns, libraries, tools, and context, Patel said. Many people dismiss AI because it doesn’t write perfect code or do the right thing on the first attempt when given very little context, he added.
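The investment Patel describes can be sketched as a standing prompt preamble that encodes organisational conventions and is prepended to every task. This is an illustrative sketch only; the conventions listed and the `platformlib` library name are invented, not anything Patel or Monzo described:

```python
# Hypothetical sketch: a reusable organisational context block prepended to
# coding prompts, so an AI assistant follows in-house conventions.
# The conventions and the `platformlib` name below are invented for illustration.

ORG_CONTEXT = """\
You are assisting engineers at an organisation with these conventions:
- Use the in-house `platformlib` HTTP client, never raw sockets.
- All public functions require docstrings and typed signatures.
- Errors are surfaced to the caller, never silently swallowed.
"""

def build_prompt(task: str) -> str:
    """Combine the standing organisational guidance with a specific task."""
    return f"{ORG_CONTEXT}\nTask: {task}\n"

# The same context travels with every request, so the model does not start
# from zero each time.
print(build_prompt("Add a retry wrapper around the payments API call."))
```

Keeping the context in one place means it can be reviewed and improved over time, much like onboarding documentation for a new engineer.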
We had entire talks at QCon London where the theme was not leaving the door open to your critical information, Patel said. Having the right security and operational guardrails means engineers can harness the power of LLMs without being slowed down, he concluded.
InfoQ interviewed Suhail Patel about showcasing his work and using AI in software development.
InfoQ: How can software engineers showcase their work?
Suhail Patel: You can get started by sharing your work internally. A common pattern I like to use for internal documents is to explicitly call out whether something is in draft, and whether I am inviting feedback or just sharing it for outward context and visibility that I am thinking about a problem.
For internal presentations, it doesn’t need to be polished or curated unless you are presenting to an unfamiliar audience. Some of the best internal talks I’ve seen have been the unconference style where there’s a 15-20 min context setting and then Q&A discussion or working through in a group.
InfoQ: How do you deal with the challenges with AI in software development?
Patel: My priority is making sure that our capabilities and platform are enablers for others in the company to realise their vision. We build interfaces and APIs that have access to scoped and minimal bits of data for LLMs, and continue to be very deliberate in code review.
A key bit of investment over the years at Monzo has been static analysis tooling for common bugs and vulnerabilities, and I think that investment is continuing to pay off in spades as these are part of mandatory checks.
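Monzo's actual tooling is not public, but the kind of mandatory check Patel describes can be illustrated with a minimal static-analysis pass. This sketch uses Python's standard `ast` module (an assumption for illustration, not Monzo's stack) to flag a well-known bug pattern, mutable default arguments:

```python
import ast

def find_mutable_defaults(source: str) -> list[str]:
    """Flag functions whose default arguments are mutable literals
    (lists, dicts, sets) - a common Python bug, since the default is
    created once and shared across all calls."""
    warnings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # kw_defaults may contain None for keyword-only args without defaults;
            # isinstance(None, ...) is False, so those are skipped naturally.
            for default in node.args.defaults + node.args.kw_defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    warnings.append(
                        f"line {default.lineno}: mutable default in '{node.name}'"
                    )
    return warnings

sample = "def append(item, bucket=[]):\n    bucket.append(item)\n    return bucket\n"
print(find_mutable_defaults(sample))  # flags the mutable default on line 1
```

Wired into CI as a mandatory check, a pass like this catches the bug class once, for every engineer and every AI-generated change, rather than relying on each reviewer to spot it.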
Our goal is to develop software (with or without AI involved) safely with all the guardrails up.
