Possibly stupid question: is automated testing actually a common practice?
Referring more to smaller places like my own - few hundred employees with ~20 person IT team (~10 developers).
I read enough about testing that it seems industry standard. But whenever I talk to coworkers and my EM, it's generally, "That would be nice, but it's not practical for our size and the business wouldn't allow us to slow down for that." We have ~5 manual testers, so things aren't considered "untested", but issues still frequently slip through. It's insurance software, so at least bugs aren't killing people, but our quality still freaks me out a bit.
I try to write automated tests for my own code, since it seems valuable, but I avoid it whenever it's not straightforward. I've read books on testing, but they generally feel like either toy examples or far more effort than my company would be willing to spend. Over time I'm wondering if I'm just overly idealistic, and automated testing is more of a FAANG / bigger company thing.
It's not that hard to set up GitHub or GitLab to make sure all the unit tests run for each PR.
If you use something else for version control, check if they offer a similar CI feature. If not, set up Jenkins.
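For a GitHub-hosted Java/Maven project, a minimal sketch of such a check might look like the workflow below (the file path, Java version, and Maven command are assumptions you'd adjust for your own build):

```yaml
# .github/workflows/tests.yml - hypothetical minimal PR check
name: tests
on:
  pull_request:        # run the suite for every PR
  push:
    branches: [main]   # and for pushes to the main branch
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      - run: mvn --batch-mode test   # the check fails if any test fails
```

Branch protection can then require that check to pass before a PR can be merged.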
I'm an SRE at a big tech company, so part of my job is to make sure CI infrastructure is readily available to our Dev partners. But I've worked at smaller companies before (10 or less SWEs) and even they had a Jenkins instance.
This is a bright red flag to me. If I worked for a company that didn't have CI, the first thing I would do is set it up. If I wasn't allowed to take the time required to do that, I would quit...
We do have CI (Azure DevOps), we aren't that insane. Though to be fair, it's relatively recent. The legacy app has a build pipeline but no tests. We got automated deployments to lower environments set up about a year back.
My main project has build pipelines as well, Spring Boot "microservices" (probably a red flag given our size and infrastructure) with code coverage around 40-60% mostly unit tests. But I'm the only dev that really writes tests these days. No deployment pipelines there though as the SysAdmin is against it (and only really let us do the legacy app reluctantly).
Ok. So if you have the infra already, it's really just a matter of actually writing the tests. That can be done incrementally.
40%-60% unit test coverage is honestly not too bad. But if the company's bottom line rests on this code, you probably want to get that up. 100% though isn't really worth it for application code, but it is definitely worth it for library code.
One thing we do where I work is that all commits must be reviewed before being merged. A great way to improve coverage is to be that guy when people send you PRs.
Automated testing is often more cost effective than manual testing. Not to say 100% automated testing is a reasonable goal. But I’ve never worked anywhere without some automated testing (unit, integration or end-to-end).
I'm on a similarly sized team, and we have put more effort into automated testing lately. We got an experienced person onto the team who knows his shit and is engaged in improving our testing. But it's definitely worth it. Manual testing tests the code now, automated testing checks the code later. That's very important, because when 5 people test things, they aren't going to test everything every time as well as all the new stuff. It's too boring.
So yes, you really REALLY should have automated testing. If you have 20 people, I'd guess you're developing something that is too large for a single person to have in-depth knowledge of all parts.
Any team should have automated tests. More specifically, you should write tests that check "business functionality", not just that your function does exactly what it is supposed to do. Our test expert made a test for something that said "ThisComponentsDisplayValueShouldBeZeroWhenUndefined" (here the component is something the users see and always expect to have a value; there are other components that might not show a value).
Then I had to change the data processing because another "component" did not show zero in an edge case. I fixed the edge case, but I also broke the test for that other component. Now it was very clear to me that I had also broken something that worked. A manual tester would maybe have noticed, but these were separate components, and they might still see 0 on the thing that broke because it happened to have the value 0. Or they simply did not know that was a requirement!
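For what it's worth, a minimal JUnit-style sketch of what a test like that could look like (the view-model class here is invented for illustration, not the actual code from that project):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DisplayValueTest {

    // Hypothetical stand-in for the real component's view model: users always
    // expect this component to show a value, so an undefined (null) input must
    // fall back to "0" rather than showing nothing.
    record DisplayValue(Double rawValue) {
        String asText() {
            return rawValue == null ? "0" : rawValue.toString();
        }
    }

    @Test
    void thisComponentsDisplayValueShouldBeZeroWhenUndefined() {
        assertEquals("0", new DisplayValue(null).asText());
    }
}
```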
We just recently started enforcing that unit tests must be green to merge features. It brings a lot more comfort, especially since you can put more trust in changing systems that deal with calculations when you know tests check that results are unchanged.
Was there any event that prompted more investment into testing? I feel like something catastrophic would need to happen before anyone would consider serious testing investment. In the past (before I joined) there were apparently people who tried to get Selenium suites but nothing ever stuck.
I think nobody sees value in improving something that is more or less "good enough" for so long. In our legacy software, most major development is copy+paste and change things, which I guess reduces the chance of regressions (at the cost of making big changes much, much slower). I think we have close to 100 4k line java files copied from the same original, plus another 20-30 scripts and configs for each...
We are doing a "microservices rewrite" that interfaces with the legacy app (which feels like a death march project by now), and I think it inherited much of the testing difficulties of the old system, in part due to my inexperience when we started. Less code duplication, but now lots of enormous JSONs being thrown all over the network.
I agree that manual testing is not enough, but I can't seem to get much agreement. I think I do get value when I write unit tests, but I feel like I can't point to concrete value because there's not an obvious metric I'm gaining. I like that when I test code, I know that nobody will revert or break that area (unless they remove the tests, I suppose), but our coverage is low enough that I don't trust them to mean the system actually works.
Our main motivator was, and is, that manual testing is very time consuming and uninteresting for devs. Spending upwards of a week before a release, because the team has to set up, pick, and perform all feature tests again on the release candidate, costs both time and money. And we saw things slip through now and then.
Our application is time critical, legacy code of about 30 years, spread between C# and database code, running in different variations with different requirements. So a single page may show differently depending on where it's running. Changing one thing can often affect others, so for us it is sometimes very tiresome to verify even the smallest changes since they may affect different variants. Since there are no automated tests, especially for the GUI (which we also do not unit test much, because that is complicated and prone to breaking), we have to not only test changes, but often check for regressions by comparing by hand to the old version.
We have a complicated system with a few integrations; setting up all the test scenarios not only takes time during testing, but also takes dev time to prepare the instructions. And I mentioned calculations: going through all the motions to verify that a calculated result is the same between two versions is an awfully boring experience, when that is exactly something automated tests can just completely take over for you.
As our application is projected to grow, so does all of this manual testing required for a single change. So all that effort put into manual testing and preparation can instead often just be put into making tests that check requirements. And once our coverage is good enough, we can manually test only the interfaces, and leave a lot of the complicated edge cases and calculation tests to automated tests. It's a bit idealistic to say automated tests can do everything, but they can certainly remove the most boring parts.
"Automate everything" is the standard practice. You can't get a pull request in at my company without automated code review including unit tests and Selenium-style practical tests, plus two human reviewers.
I've never worked (recently) at a shop that didn't do some level of automated testing. In terms of having a bunch of people working on a big codebase without stuff being randomly broken most of the time, I'd say it's an absolute requirement to do it to at least some passable level.
In my experience it's, if anything, sometimes the opposite way -- like they insist on having testing even when the value of it the way it's being implemented is a little debatable. But yes I think it's important enough in terms of keeping things productive and detecting when something is totally-broken that you need to.
(Especially now when you can literally just paste a module into GPT and ask it to generate some sorta-stupid-but-maybe-good-enough test cases for it and with minimal tweaking you can get the whole thing in in like 10 minutes.)
like they insist on having testing even when the value of it the way it’s being implemented is a little debatable
I started to feel like I was this guy when I asked someone to test their code after multiple sprints of being sent back from QA. Good to hear I'm not the crazy one, I guess.
Very common. Your coworkers are either idiots, or more likely they're just being lazy, can't be bothered to set it up and are coming up with excuses.
The one exception I will allow is for GUI programs. It's extremely difficult to write automated tests for them, and in my experience it's such a pain that manual testing is often less annoying. For example, VSCode has no automated UI tests as far as I know.
That will probably change once AI-based GUI testing becomes common but it isn't yet.
For anything else, you should 100% have automated tests running in CI and if you don't you are doing it wrong.
Leadership may be idiots, but the devs are mostly just burnt out, have recognized that quality isn't a very high priority, and know not to take too much pride in the product. I think it's my own problem that I have a hard time separating my pride from my work.
Thanks for the response. It's good to know that my experience here isn't super common.
Our standard practice is to introduce a thin layer in front of any I/O code, so that we can mock/simulate that part in tests.
So, if your database-library has an insert()-function, you'd introduce an interface/trait with an insert()-function, whose default implementation just calls that database-library and nothing else. And then in the test, you stick your assertions behind that trait.
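A minimal Java sketch of that pattern, with invented names (the real database-library call would live behind the default implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Thin layer in front of the I/O: the production implementation just
// delegates to the real database library and nothing else.
interface CustomerStore {
    void insert(String customer);
}

class DatabaseCustomerStore implements CustomerStore {
    @Override
    public void insert(String customer) {
        // databaseLibrary.insert(customer);  // the only line touching real I/O (assumed call)
    }
}

// In tests, swap in a fake and put the assertions behind the interface.
class FakeCustomerStore implements CustomerStore {
    final List<String> inserted = new ArrayList<>();

    @Override
    public void insert(String customer) {
        inserted.add(customer);
    }
}

// Business code depends only on the interface, so it can be exercised
// in unit tests without a database.
class Registration {
    private final CustomerStore store;

    Registration(CustomerStore store) {
        this.store = store;
    }

    void register(String name) {
        if (name == null || name.isBlank()) {
            throw new IllegalArgumentException("name required");
        }
        store.insert(name.trim());
    }
}
```

A unit test then constructs Registration with the FakeCustomerStore and asserts on its inserted list, never touching a real database.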
So, we don't actually test the interaction with outside systems most of the time, because well:
that database-library is tested,
the compiler ensures we're calling that library correctly (assuming no use of a scripting language), and
it's often easier to simulate the behavior of the outside system correctly, than to set it up for each test case.
We do usually aim to get integration tests with all outside systems going, too, to ensure that we're not completely off the mark with the behavior that we're simulating, but those are then often reduced to just the happy flow.
My team follows test driven development, so I write a test before writing the feature that the test, well, tests.
This leads to cleaner code in general because it tends to be the case that easy to test code is also easy to read.
On top of this fact, the test suite acts as a sort of "contract" for the code behaviour. If I tweak the code and a test no longer works, then my code is doing something fundamentally different. This "contract" ensures that changes to one codebase aren't going to break downstream applications, and makes us very aware of when we are making breaking changes so we can inform downstream teams.
Writing tests and having them run at PR time (or before it's deployed to production, if you're not using some sort of VCS and CI/CD) should absolutely be a part of your dev cycle. It's better for everyone involved!
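As a rough sketch of that workflow under JUnit, with a hypothetical discount feature: the test is written first and fails, then the smallest implementation is added to make it pass.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountTest {

    // Step 1: written before the feature exists; it fails (or doesn't compile) at first.
    @Test
    void ordersOverOneHundredGetTenPercentDiscount() {
        assertEquals(108.0, Discount.apply(120.0), 0.001);
        assertEquals(50.0, Discount.apply(50.0), 0.001);
    }
}

// Step 2: the implementation is added until the test goes green.
// The test now doubles as the "contract" for this behaviour.
class Discount {
    static double apply(double amount) {
        return amount > 100.0 ? amount * 0.9 : amount;
    }
}
```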
Yeah, debugging tests is an important part of test driven development.
You also have to be careful. Some tests are for me to debug my code and aren't part of the 'contract'.
But on the other hand, it's really nice. If I spend a couple of hours debugging actual code and come out of the process with internal tests, the next time it breaks, the new tests make it much easier to identify what broke. Previously, that would have been almost wasted effort, you fix it and just hope it never breaks again.
Yeah, but it isn't usually very difficult to write a test correctly, unit tests especially.
If you can't write a test to validate the behaviour that you know your application needs to exhibit, then you probably can't write the code to create that behaviour in the first place. Or, in a less binary sense, if you would write a test which isn't "right", you're probably just as likely to have written code that isn't "right".
At least in the case with the test, you write the test and the code, and when the test fails (or, doesn't fail when it should) you're tipped off to something being funky.
I'm sure you could end up writing a test that's bad in just the right way to end up doing more harm than good, but I do think that's the exception (heh).
You should think of an automated test as a specification. If you've got the wrong requirements or simply make a mistake while formulating it, then yeah, it can't protect you from that.
But you'd likely make a similar or worse mistake, if you implemented the production code without a specification.
The advantage of automated tests compared to a specification document, is that you get continuous checks that your production code matches its specification. So, at least it can't be wrong in that sense.
Sure, but testing usually relies purely on whether your assumptions are right or not - whether you do it automatically or manually.
Like if you're manually testing a login form for example, and you assume that you've filled in the correct credentials, but you didn't and the form still lets you continue, you've failed the testing because your assumption is wrong.
Like even if the specs are wrong, and you make a test for it, let's say in a calculator: Calculate(2+2).Should().Equal(5) - if this is your assumption based on the specs or something, you can start up the calculator, manually click through the UI of the calculator, code something that returns 5, and deliver it.
Then once someone corrects you, you have to start the whole thing over, open up the calculator, click through the UI, do the input, now it's 4, yay!
If you had just written a test - even relying on a spec that was wrong, it's still very easy to change the test and fix the assumption.
Also, let's say next sprint you have to build a subtract function in the calculator, and it breaks the + operation. Now you have to re-test all operations manually to check you didn't break anything else. If there were unit tests with like 100 different operations, you just run them all, see they're all still good, and you're done.
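A sketch of what that could look like as a JUnit parameterized test (the calculator here is a stand-in, not anyone's real code): each known-good result becomes one row, and the whole table reruns on every change.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class CalculatorRegressionTest {

    // Hypothetical calculator under test.
    static int calculate(int a, String op, int b) {
        return switch (op) {
            case "+" -> a + b;
            case "-" -> a - b;
            case "*" -> a * b;
            default -> throw new IllegalArgumentException(op);
        };
    }

    // One row per known-good result; extend the list as operations are added.
    @ParameterizedTest
    @CsvSource({
            "2, +, 2, 4",
            "7, -, 3, 4",
            "6, *, 7, 42"
    })
    void knownResultsStayTheSame(int a, String op, int b, int expected) {
        assertEquals(expected, calculate(a, op, b));
    }
}
```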
Here’s my random collection of thoughts on the subject.
I have no idea how common it is in general. Seems like some devs build tests while others don’t. This varies plenty on a team level as well as organization wide. I’ve observed this at small to very large companies, though not FAANG where I generally hope and expect that tests are a stronger standard.
I will say that test are consistently and heavily used in every large, open source project that I’ve reviewed. At some point, I think quality test cases become a requirement.
Here’s the big thing. Building automated tests is almost always a wise investment, regardless of the size of the org. Manual testing is dramatically more expensive and less effective than running unit and integration tests. I’ve never written unit tests and not found issues.
More importantly, writing unit tests forces you to write code that can be tested. This is important. IMO, code that can be tested is 1) structured differently and 2) almost always better.
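To illustrate what that restructuring tends to look like (a generic sketch, not from any particular codebase): pull the decision logic out of the code that does I/O, so the interesting part becomes a plain function that's trivial to test.

```java
import java.time.LocalDate;
import java.util.List;

class InvoiceRules {

    record Invoice(String id, LocalDate dueDate, boolean paid) {}

    // The decision logic is a pure function: no database, no email client,
    // so it can be unit tested exhaustively with plain values.
    static List<Invoice> overdue(List<Invoice> invoices, LocalDate today) {
        return invoices.stream()
                .filter(i -> !i.paid() && i.dueDate().isBefore(today))
                .toList();
    }

    // The thin outer layer that fetches invoices and sends reminders still
    // exists, but it no longer contains anything interesting to unit test.
}
```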
Unit tests protect you from your own mistakes. Frequently. Integration tests protect you from other people. E.g when your code depends on an api and that api unexpectedly introduces a breaking change.
Everybody likes having quality tests. Nobody likes writing tests.
Quality tests are basically a strict requirement for fully automating ci/cd to production. Sure, you can skip tests and automate prior deploys, but I certainly don’t recommend it. I would expect people to be fired for doing this.
Chasing 100% test coverage is a fool's game. Think about your code, what matters, and what doesn't. Test the parts that add value and skip the rest. This is highly related to how writing unit tests changes your code.
Building front end tests is inherently hard. It’s practically impossible to fully test front end code. Not even close.
Personally, I like the idea of skipping tests when you're building a POC. Before the POC is done, you may not know if your solution is viable or what needs to be tested. The POC helps you understand. Build tests for the MVP and further iterations.
Quality ci/cd tests are complemented by quality observability, which is a large and independent topic.
These are more or less the thoughts I typically hear online, and they all make sense. What I tend to notice when interviewing people from big(ger) companies than mine (mostly banks) is that testing for them sounds mostly like hitting some minimum coverage number in the CI/CD. Probably still has big benefits, but it doesn't seem super thoughtful? Or is testing just so important that even testing on autopilot has decent value?
I get that same feeling with frontend testing. Unit testing makes sense to me. Integration testing makes sense but I find it hard to do in the time I have. But frontend testing is very daunting. Now I will only test our data models we keep in the frontend, if I test anything frontend.
Test coverage is useful to measure simply because it’s a metric. You can set standards. You can codify the number into ci/cd. You can observe if the number goes up or down over time. You can argue if these things are valuable but quantifying test coverage just makes it simpler or possible to discuss testing. As people discuss test coverage and building tests becomes normalized, the topic becomes boring. You’ll only get thoughtful discussions on automated testing when somebody establishes a new method, pattern, etc. After that, most tests are very simple. That’s often the point.
Even “testing on autopilot” has high value.
You can build lots of useful front end tests. There are tools for it. But it’s just not possible to test everything because you can’t codify every requirement. E.g. ensure that this ui element is 5 pixels below some other element, except when the window shrinks, and …
I haven’t seen great front end tests. But the ones I’ve seen mostly focus on functionality and flow rather than aiming to cover all possible scenarios. Unit tests are different in this regard.
Integration testing makes sense but I find it hard to do in the time I have.
This is a red flag. Building tests should be a planned part of your work, usually described as acceptance criteria. If you need 4 hours to write a code change, then plan for 8 or whatever so you can build tests. Engineering leaders should encourage this. If they don’t, I would consider that a cultural problem. One that indicates a lack of focus on quality and all of the problems that follow.
Edit: I want to soften my “red flag” comment. That's a red flag for me. That job isn't necessarily bad. But I would personally not be interested. It's ok to accept things like, "we don't write tests and sometimes we deal with issues". Especially if it's a good job otherwise.
I'm in a team of 4 developers and we demand automated testing. Ok that's part of a slightly bigger development team but even our QC team have automated tests that they run for integration testing.
And please, for the love of all that is holy.. DO NOT let some schmuck go through setting up a test platform with all defaults and cause thousands of notifications per minute, with no plan for actually addressing the notifications (by either actually fixing the issue or tweaking thresholds for your specific situation.)
Else all the system does is train people to become apathetic to notifications and warnings. I was once on an IT team that had red notification boards and 104k notifications.. and all they did was joke about it.. seriously.. just turn the system off then.
Sometimes you'd use defensive programming (type checking, exception handling, null safeguards, fallback/optional values), which can be argued to be a sort of in-place testing, so testing can matter less to your project's robustness than the readability of its core business logic. And some languages lean more heavily towards defensive programming (e.g. Go, Scala, or well-written TypeScript), while others rely more on tests but are also designed in a way that makes testing really easy, since they seek to keep things loosely coupled (Elixir or Clojure).
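A small Java sketch of what's meant by a fallback value acting as an in-place check (names invented for illustration):

```java
import java.util.Map;
import java.util.Optional;

class Settings {

    private final Map<String, String> values;

    Settings(Map<String, String> values) {
        this.values = values;
    }

    // Defensive style: a missing or blank setting never propagates as null;
    // the fallback is enforced at the point of use rather than by a separate test.
    int timeoutSeconds() {
        return Optional.ofNullable(values.get("timeoutSeconds"))
                .filter(s -> !s.isBlank())
                .map(Integer::parseInt)
                .orElse(30);
    }
}
```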
Also, if your language doesn't have a quality REPL to reliably test things manually, there is a relatively high chance your debugging process is causing you to waste more time than having good test coverage would.
I think even in languages that do a lot at compile time (Rust, Haskell, etc.) it's still standard practice to write tests. Maybe not as many tests as e.g. Python or JavaScript or Ruby. But still some.
I work in silicon verification and even where things are fully formally verified we still have some tests. (Generally because the formal verification might have mistakes or omissions, and occasionally there are subtle differences between formal and simulation.)
We started focusing in on automated testing when we had 3 manual QAs (not including me), and since then every new project has started with plans for automated testing.
It's important to note that we don't do automated tests instead of manual testing. Manual testing is still important for focused review of new features/bugs, but automated tests make sure code changes aren't breaking anything elsewhere.
Also this is all about end-to-end tests (with Selenium, in our case). If you're talking about a lack of unit/integration tests within the codebase itself, that's a huge red flag. Even if quality issues aren't the end of the world, they will definitely make people reconsider using your product. Who wants to trust their financial information with unstable software? It's also making your QA team less efficient since they're having to chase down issues that would be better recognized by the dev who wrote them.
Automated tests are pretty common, yes. It's not strictly speaking a matter of company size, but more one of technical maturity.
Automated tests do not slow your business down; they are in fact the only way to not get slowed down as the amount of code you maintain increases.
The alternative cost of not having tests catch issues before they reach production is very significant - an error caught by an automated test costs nothing, while an error that makes it into production can cause immense harm to the business, if only for the time necessary to remediate the issue, which is time that could have been spent on actually making progress on delivering new features.
Not to mention the high cost of having to employ increasing amounts of manual testers just to keep the worst of issues from slipping through.
All in all, not having automated tests in place is a significant mistake from a business perspective. You might want to have a frank discussion with your CTO about it.
My context: I'm in a small software company of ~30 people. We do various projects for various customers. We're close to the machine sector, although my team is not. I'm the lead of a small 3-person developer team/continuous project.
I write unit tests when I want to verify things - when I'm working in somewhat low-level, algorithmic, or interfacing areas.
I would write more, and write them against our interfaces, if those were exposed to someone or something that needs that stability and verification.
Our testing is mainly manual (mostly user-/UI-/use-interface-centric), and we have data restrictions and automated reporting data consistency validations. (Our project is very data-centric.)
it’s not practical for our size and the business wouldn't allow us to slow down for that
Tests are an investment. A slowdown to implement tests will increase maintainability and stability down the line - which could already pay off before delivering (issues noticed in review, before merge, or before delivery).
It may very well be that they wouldn't even slow you down, because they could lead you to a more thought out implementation and interfacing. Or noticing issues before they hit review, test, or production.
If you have a project that will be maintained then it's not a question of slowing down but of are you willing to pay more (effort, complexity, money, instability, consequential dissatisfaction) down the line for possibly earlier deliverables?
If tests would make sense and you don't implement them then it's technical debt you are incurring. It's not sound development or engineering practice. Which should require a conscious decision about that fact, and awareness on the cost of not adding tests.
How common automated testing is - I don't know. I think many developers will take shortcuts when they can. Many are not thorough in that way, and give in to short-sighted time pressure and fallacious reasoning.
Perhaps it's just part of being somewhere where tech is seen as a cost center? Technical leadership loves to talk big about how we need to invest in our software and make it more scalable for future growth. But when push comes to shove, they simply say yes to nearly every business request, tell us to fix things later, and we end up making things less scalable and harder to test.
It feels terrible and burns me out, but we never seem to seriously suffer for poor quality, so I thought this could be all in my head. I guess I've just been gaslit by my EM into thinking this lack of testing is a common occurrence.
(A programming lemmy may not be a terribly representative sample, but I don't see anyone here anywhere close to as wild west as my place.)
It feels terrible and burns me out, but we never seem to seriously suffer for poor quality, so I thought this could be all in my head.
The way you suffer for it, is in a loss of agility.
When I'm in a project with excellent unit test coverage, I often have no qualms with typing up a hot fix, running it through our automated tests and then rolling it out, in less than an hour.
Obviously, if it's a critical target system, you might want to get someone's review anyways, but you don't have to wait multiple days for your manual testers to get around to it.
Another way in which it reduces agility is in terms of moving people between projects.
If all the intended behavior is specified in automated tests, then the intern or someone, who just got added to the project, can go ham on your codebase without much worry that they'll break something.
And if someone needs to be pulled out from your project, then they don't leave a massive hole, where only they knew the intended behavior for certain parts of the code.
Your management wants this, they just don't yet understand why.
I worked at 8 different companies as a contractor, so hopefully my sample size is big enough to be meaningful. I'd say it's 50-50. The companies that don't, usually know that they should but they need a little help. Companies that don't do it and they think they don't need it, are becoming more and more rare (fortunately).
Stick with it. If you're a junior, don't go evangelizing automated testing because it will fall on deaf ears until you're a little more experienced. Keep practicing and offer to set things up if they haven't already.
yes, it's very common in my region. 50% of companies I worked at had CI servers that ran unit tests round the clock. the companies are only slightly bigger than yours. also i know multiple companies my company worked with also have CI setups.
some even auto deploy to prod when the tests in master passed okay.
most use hudson or jenkins for CI with junit, phpunit, selenium and/or cypress for testing.
I wish. At most companies I've worked at, I was maintaining monolithic legacy code that's hard to test properly. Sometimes another team was developing the next best thing under management guidance (so it would become the next monolithic legacy code), but usually no.
I've only worked at one company that did TDD and things were smooth.
As usual, management only sees short-term and it's hard to impress on them that any time lost now by implementing proper testing will be gained in the long run.
another team was developing the next best thing under management guidance (so it would become the next monolithic legacy code)
Pretty much what my team is doing. No need to spend time improving the old system when this one will replace it so soon, right? (And no, we will not actually replace anything anytime soon.)
Yes, it's pretty standard, although how valuable it is depends on a lot of factors. You can write a lot of useless tests just to get the expected "coverage". Also management will never see value in that type of work even after things break in production.
At my company I believe we are shifting over a threshold of technical complexity where before, manual testing was a better value proposition because it didn't take very long and adding test tasks was a slowdown. We're reaching a point where there are cases that are annoying to test every time, manual tests that would be convoluted to put together every time, etc., and automated testing that does all of this with zero effort seems worth the added task.
And once there is a practical day-to-day methodology for writing and running tests that everyone is already using, adding them for the "easy" manual tests gets easier too: they get run with the same command as all the others anyway, and if it's an easy manual test it's not much code anyway, since you've already got whatever tools you need for the more convoluted cases.