I recently spent about two weeks trying to figure out why an intercontinental connection between two of our sites was broken. It's not really my job, since I only care about the application level, but the network guys were beyond useless.
In the end I had the problem isolated to a specific network segment in India, which made them look at the right system and fix things. The reason? "We put up a firewall the day your problems started which blocks everything, if we allow your connection it works".
I actually had to refactor a configuration module some time ago, and the tests really came in handy. Was it worth it? Well... it saved some time that would otherwise have gone to debugging problems manually, but more importantly it made me a lot more confident that the functionality that worked before still worked after.
The ones I have seen so far were probably written by the same people who don't understand the usefulness of comments, and maintained with the same enthusiasm.
The one I hate? Your unit tests pass when run locally, and in your sandbox environment, and in dev, and in UAT, but in prod? Fuck that, they fail with reckless abandon.
After many years (10+), I finally found a company that actually, really implements CI/CD. Then I looked at the tests, and they're the most inane shit imaginable: tacked on top of ancient existing code, never maintained. I spent more time fixing the stupid tests than actually fixing the bugs I was tasked with fixing.
Amazing.
We test the shit out of our APIs. We do more API-level/integration testing though.
I.e. a test will be something like: if the db is in this state and we hit this endpoint with these params, does it return what we expect and update the db correctly? (There's a rough sketch of one at the end of this comment.)
Our app is primarily about users maintaining stuff on big datasets with complicated aggregation and approval logic. So setting up a scenario and checking the app does what the business logic says it will do is what we want to know.
It makes refactoring wayyyyy less painful to just know that the app will always behave itself. Rather than testing whether a function can add 1 + 2 correctly, we can test that each endpoint does what it's supposed to do.
It gives us loads of confidence that the backend is doing what it's supposed to. After a huge refactor you don't need to worry about whether you broke the tests themselves or whether a failure is genuine: if the tests all pass, everything is working as it should.
Downside is longer test execution times when running the full suite, because a temporary db needs to be set up. Worth the trade-off for us though.
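Here's that rough sketch. Everything in it is a made-up stand-in, not our actual stack: a tiny Flask + SQLAlchemy app with a hypothetical Item model and approve endpoint, driven by pytest against a throwaway in-memory SQLite db:

```python
# test_items_api.py - sketch of an endpoint-level test: set up db state,
# hit the endpoint, check the response AND the resulting db state.
import pytest
from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Item(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False)
    approved = db.Column(db.Boolean, default=False)

def create_app():
    app = Flask(__name__)
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///:memory:"
    db.init_app(app)

    @app.post("/items/<int:item_id>/approve")
    def approve(item_id):
        item = db.session.get(Item, item_id)
        if item is None:
            return jsonify(error="not found"), 404
        item.approved = True
        db.session.commit()
        return jsonify(id=item.id, approved=item.approved)

    return app

@pytest.fixture
def client():
    app = create_app()
    with app.app_context():
        db.create_all()  # fresh throwaway db per test
        db.session.add(Item(id=1, name="widget"))  # "db in this state"
        db.session.commit()
        with app.test_client() as c:
            yield c
        db.drop_all()

def test_approve_updates_db_and_returns_expected(client):
    # "hit this endpoint with these params..."
    resp = client.post("/items/1/approve")
    # "...does it return what we expect..."
    assert resp.status_code == 200
    assert resp.get_json() == {"id": 1, "approved": True}
    # "...and update the db correctly"
    assert db.session.get(Item, 1).approved is True

def test_approving_missing_item_returns_404(client):
    assert client.post("/items/999/approve").status_code == 404
```

The important bit is the last two assertions: check the response and then check the db, because an endpoint can easily return the right thing while persisting the wrong thing.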
I can't really imagine working on any code base that has to actually be maintained and doesn't have tests. The number of times tests have saved my ass at my job is uncountable.
And it's the number 1 priority for management to employ as few developers as possible and stretch their teams as thinly as possible. Hence still no unit tests at any of the companies I've worked at recently, despite everyone knowing they're worth it, including lip service from management. They just won't invest in testing, no matter what. One company even fired all the testers, then complained to the developers that the product was getting less reliable.
I'm working at a company with no automated tests. There isn't even a collection of regression tests or anything to follow. Could anyone share or point me towards a good template for building out test cases as a first step?
I think this is something you're just going to have to jump into and start, since you don't have anything to work off of. It's going to take a lot of work, but at least you'll be able to work off your own ATFs once you finish. Good luck.
I mean, start with trivial cases of the core functionality of what your system does. Then build upon it based on your own findings and what your clients report.
E.g. if your system loads images, then feed it a tiny 5x5px solid square or checkerboard pattern and see if it loads. Then try multiple images, different formats (webp, gif, png, tga), etc., see if that breaks anything, and keep building out.
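Something like this sketch, say, if you were in Python. load_image() is a hypothetical stand-in for whatever your system's real loading path is, and Pillow generates the tiny fixture images on the fly:

```python
# Sketch of "start tiny, then build out": one trivial case, parameterized
# over formats so each new format is one line of test code.
import io
import pytest
from PIL import Image

def load_image(data: bytes) -> Image.Image:
    # hypothetical stand-in for the system under test
    return Image.open(io.BytesIO(data))

def make_image(fmt: str, size=(5, 5)) -> bytes:
    # generate a tiny solid-colour square in the given format
    buf = io.BytesIO()
    Image.new("RGB", size, color=(255, 0, 0)).save(buf, format=fmt)
    return buf.getvalue()

@pytest.mark.parametrize("fmt", ["PNG", "GIF", "WEBP", "BMP"])
def test_loads_tiny_solid_square(fmt):
    img = load_image(make_image(fmt))
    assert img.size == (5, 5)
```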
If there are zero automated tests, things probably weren't written with (automated) testing in mind, so there may be a lot of coupling... starting off with integration tests just to validate existing behavior is a good start. Hopefully the existing applications aren't also inextricably bound to environments that run on pet servers managed by other teams...
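One cheap way to do that "validate existing behavior" step is a characterization (golden master) test: record what the code currently does, then assert it keeps doing that while you untangle it. A minimal sketch, where legacy_price() is a hypothetical stand-in for the coupled code you've inherited:

```python
# Characterization test: pin down current behavior before refactoring.
import json
from pathlib import Path

def legacy_price(qty: int, tier: str) -> float:
    # imagine this is the gnarly legacy code you don't fully understand yet
    return round(qty * (9.99 if tier == "gold" else 12.49) * 0.97, 2)

GOLDEN = Path("golden_prices.json")

def test_legacy_price_matches_recorded_behavior():
    cases = [(q, t) for q in (0, 1, 7, 100) for t in ("gold", "basic")]
    actual = {f"{q}/{t}": legacy_price(q, t) for q, t in cases}
    if not GOLDEN.exists():
        # first run records current behavior as the baseline; commit the file
        GOLDEN.write_text(json.dumps(actual, indent=2))
    assert actual == json.loads(GOLDEN.read_text())
```

The baseline file gets committed, so any refactor that changes behavior fails loudly instead of silently.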
It probably really depends on the project, but I'd try to start with the tests that are easiest/nicest to write and those that will be most useful. Look for complex logic that is also quite self-contained (see the sketch below).
That will also help convince others of the value of tests if they aren't on board already.
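For example, something like this in Python: a pure, self-contained function is cheap to test and the tests pay off immediately. split_payment() is hypothetical; substitute whatever gnarly-but-isolated logic your project has:

```python
# Sketch of "complex but self-contained": no db, no network, just logic.
import pytest

def split_payment(total_cents: int, n_people: int) -> list[int]:
    """Split a total so shares differ by at most one cent and sum exactly."""
    if n_people <= 0:
        raise ValueError("need at least one person")
    base, remainder = divmod(total_cents, n_people)
    return [base + (1 if i < remainder else 0) for i in range(n_people)]

def test_shares_sum_to_total():
    assert sum(split_payment(1000, 3)) == 1000

def test_shares_differ_by_at_most_one_cent():
    shares = split_payment(1001, 3)
    assert max(shares) - min(shares) <= 1

def test_rejects_zero_people():
    with pytest.raises(ValueError):
        split_payment(100, 0)
```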