QA Challenge Accepted 4.0 Highlights
The QA Challenge Accepted 4.0 conference was held on the 21st of April 2018 in Sofia at the International Expo Center. The organization was good overall and the lectures were interesting. All slides were in English, while most of the presentations were in Bulgarian and some in English, which could be a bit confusing if your English is not very good. Here are the highlights from the presentations:
Misconceptions About Software Testing
The first speaker was Claudiu Draghia from Romania. He presented in English.
A misconception is an incorrect opinion based on a faulty understanding. We are all biased by our environment and team, and we each have a different perspective. Here are some of the misconceptions, followed by the reasons why they are incorrect:
- Everyone can test, so testers should be paid less than developers. Can everybody drive? Everybody can try, but not everybody will be good at it. Testing is mistakenly seen as easy; we should communicate better what testing is and what problems we are solving.
- Testing can be done under all circumstances. Not everything can be tested; it has to be testable. Testability is an important property of the software under test.
- Testing cannot be done without a specification. Exploratory testing busts this misconception.
- Understanding requirements means reading them. Try to visualize, draw, and model them. Try to understand them piece by piece, one step at a time.
- Testers break software. It is not true: testers break the illusions about the software, which was already broken.
- Testers are senior if they have five-plus years of experience. You could have been doing the same things all those years. You are senior if you can succeed in any other team.
- There is not enough time for testing. If your testing time is reduced, reevaluate what has to be tested and reduce the scope.
- Testers are bearers of bad news. You should be diplomatic; try saying “We are not done with this project yet”.
- It is OK and even good to test at the end of the sprint. Test one bite at a time and start early; otherwise you will feel squeezed and be less effective.
- Testing is what testers do. It is an activity performed by all team members in a variety of ways.
- Testing is not a structured activity. It should be structured in your head.
- Testing finds all bugs. What testing finds depends on goals, time, people, and resource constraints.
- Production bugs are failures. You will have missed bugs, and this is normal if you learn from your mistakes; you should not make the same mistake twice.
Deep Oracles: Multiplying The Value Of Automated Tests
Emanuil Slavov talked about improving high-level automated tests. Usually you use an oracle to determine whether a test has passed or failed.
There could be issues even when a test passes; a deep oracle finds those issues. Emanuil gave us several tips on how to make existing automated tests catch more issues. Tests get flakier the higher they sit in the test pyramid: Google has around 1.5% flaky tests, and 1% is considered OK.
Some bug sources:
- Configuration issues (load balancer configuration, connection pool capacity, external URLs not in the CDN)
- Application issues such as thread-unsafe code, lack of retries in a distributed system, or database connections not closed after use.
You should use test data that is random but looks real: use different names, countries, credit cards, and so on. Different data exercises different code. Pay attention to special symbols in the data, such as quotes, Twitter mentions, and URLs.
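No code was shown for this, but the idea is easy to sketch. Below is a hypothetical TypeScript generator for random but realistic-looking users that deliberately mixes in quotes, Twitter mentions, and URLs; all the pools and fields are invented for the example.

```typescript
// Hypothetical generator: random but realistic-looking test users.
// The pools below are invented for this sketch.
const firstNames = ["Maria", "José", "O'Brien", "Anna-Lena", "Georgi"];
const countries = ["Bulgaria", "Côte d'Ivoire", "Brazil", "Japan"];
// Special symbols that exercise escaping/encoding code paths.
const bioSnippets = [
  'Loves "quoted" text',
  "Follow me @qa_challenge",
  "See https://example.com/profile?id=1&lang=bg",
];

function pick<T>(pool: T[]): T {
  return pool[Math.floor(Math.random() * pool.length)];
}

function randomUser() {
  return {
    name: pick(firstNames),
    country: pick(countries),
    bio: pick(bioSnippets),
    // Random but plausible 16-digit card number (not Luhn-valid).
    card: Array.from({ length: 16 }, () => Math.floor(Math.random() * 10)).join(""),
  };
}

console.log(randomUser());
```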
Use service virtualization. Testing against real third-party services (PayPal, Amazon) is often not possible or is expensive.
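As a minimal illustration of service virtualization (not from the talk), here is a hypothetical Express stub standing in for a payment provider; the endpoint and payload shape are invented.

```typescript
// Minimal service-virtualization sketch: an in-process stub that stands in
// for an expensive third-party payment API. Point the application under test
// at this URL instead of the real provider.
import express from "express";

const stub = express();
stub.use(express.json());

stub.post("/v1/payments", (req, res) => {
  // Always approve, so functional tests don't depend on the real provider.
  res.json({ id: "pay_test_123", status: "approved", amount: req.body.amount });
});

stub.listen(8081, () => console.log("Payment stub listening on :8081"));
```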
Tests should be able to generate all the data they need. Use an attack proxy with not-so-random data. An attack proxy is an HTTP or web service sitting between the test and the application, used to look for SQL injection, data disclosure, and other security issues. Emanuil showed us how numbers in a POST request, an API token, the referrer, and JSON fields can be attacked.
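The talk demonstrated this with a real proxy; the TypeScript sketch below only illustrates the core idea of mutating each JSON field with classic attack payloads. The payload list is a tiny, illustrative subset.

```typescript
// Sketch of the attack idea: take a captured JSON body and emit variants
// with each field replaced by a well-known attack payload. A real attack
// proxy (ZAP, Burp) does this between the test and the application.
const payloads = ["' OR '1'='1", "<script>alert(1)</script>", "../../etc/passwd"];

function* mutate(body: Record<string, unknown>): Generator<Record<string, unknown>> {
  for (const key of Object.keys(body)) {
    for (const payload of payloads) {
      yield { ...body, [key]: payload };
    }
  }
}

// Usage: replay each mutated body against the API and watch for 500s,
// stack traces, or data disclosure in the response.
for (const variant of mutate({ userId: 42, token: "abc", referrer: "https://a.b" })) {
  console.log(JSON.stringify(variant));
}
```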
You should carefully assess whether to build your own tool or use an existing one for security testing; the tool has to know a lot about your application to be successful. The speaker mentioned a bug where XSS could be executed in an email field because there was no character limit.
The following techniques need a dedicated test environment (servers or containers).
Usually a test relies on assertions, but the application code may continue executing, and logging, after the test's last step.
Track exceptions
Look for unexpected exceptions in the logs. His team had failed to catch a parse exception in their Elasticsearch logs. Bad data is similar to an exception: it can be missing, unrealistic, duplicated, badly formatted, out of sync, or conflicting.
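A minimal sketch of this kind of deep oracle, assuming a plain-text application log and an allowlist of known noise (the path and patterns are invented):

```typescript
// Deep oracle that runs after the functional assertions pass:
// scan the application log for exceptions that are not on an allowlist.
import { readFileSync } from "fs";

const allowlist = [/ExpectedTimeoutException/]; // known, accepted noise

function unexpectedExceptions(logPath: string): string[] {
  const lines = readFileSync(logPath, "utf-8").split("\n");
  return lines.filter(
    (line) => /Exception|ERROR/.test(line) && !allowlist.some((rx) => rx.test(line))
  );
}

const found = unexpectedExceptions("/var/log/app/application.log");
if (found.length > 0) {
  throw new Error(`Test passed, but the log contains ${found.length} unexpected exceptions`);
}
```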
It depends on the context: the value zero is valid in SQL, but in PHP zero evaluates to false in an if statement. Because of such a bug, the team watched an arithmetic progression in the database kill their production server three times.
According to the speaker's experience, 19% of the exceptions were caused by bad data.
The last technique described was monitoring application metrics. Record application stats after each test run; with fast tests you can tie performance bottlenecks to commits. Look for an increase in application log lines or exceptions after each commit. Another metric that can indicate an issue is the total count of database queries. With parallel execution on 12-16 threads you can catch deadlocks in the logs.
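A hedged sketch of the idea: collect a few application stats after each test run and compare them with a baseline. The /internal/stats endpoint and the thresholds are assumptions for the example.

```typescript
// Record application stats after each test run so regressions
// can be tied to the commit that introduced them.
interface RunStats {
  commit: string;
  logLines: number;
  exceptions: number;
  dbQueries: number;
}

async function collectStats(commit: string): Promise<RunStats> {
  // Hypothetical stats endpoint exposed by the test environment.
  const res = await fetch("http://test-env:8080/internal/stats");
  const s = await res.json();
  return { commit, logLines: s.logLines, exceptions: s.exceptions, dbQueries: s.dbQueries };
}

function compare(baseline: RunStats, current: RunStats): void {
  // Flag a commit that makes the app noisier or chattier with the database.
  if (current.exceptions > baseline.exceptions || current.dbQueries > baseline.dbQueries * 1.2) {
    console.warn(`Possible regression introduced by commit ${current.commit}`);
  }
}
```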
The speaker mentioned some tools the team has used as an attack proxy: Acunetix, Netsparker, ZAP, and Burp proxy with headless execution. Burp is a Java tool that can run from the command line, and they now use it with a slight modification.
Emanuil's blog is emanuilslavov.com.
How To Make An Agile Team’s Life Easier With Automation At Every Level – Testing Of Microservices
Stanislava Stoykova and Atanas Georgiev talked about a large KPMG project they inherited. Multiple teams had to be able to work on the project with rapidly changing technology. They solved these challenges with microservices: small applications that work independently and use their own data.
They use Azure Service Fabric, ASP.NET Core, Angular, EF Core, and NServiceBus. The speakers showed us the registration microservice SPA.
In their team, all team members test. The QAs know which test scenarios are covered by unit tests, and the team has API tests for the frontend-backend communication.
They started testing with Postman and Newman, then found AspNetCore.TestHost. Its main advantage is that no real database is needed: it uses an in-memory database. Their API tests run in seconds after each commit.
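AspNetCore.TestHost itself is a C# library, so the sketch below only mirrors the same in-process idea in TypeScript, using Express and supertest: the test drives the application object directly, with no real HTTP server and an in-memory map standing in for the database. All routes and data are invented.

```typescript
// In-process API test, analogous in spirit to AspNetCore.TestHost.
import express from "express";
import request from "supertest";

const app = express();
const users = new Map<string, string>(); // in-memory stand-in for the DB
app.use(express.json());
app.post("/users", (req, res) => {
  users.set(req.body.id, req.body.name);
  res.status(201).end();
});
app.get("/users/:id", (req, res) => {
  const name = users.get(req.params.id);
  name ? res.json({ name }) : res.status(404).end();
});

// supertest talks to the app object directly, so this runs in seconds.
async function testRegistration() {
  await request(app).post("/users").send({ id: "1", name: "Ana" }).expect(201);
  await request(app).get("/users/1").expect(200, { name: "Ana" });
}
testRegistration();
```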
The microservices approach is expensive, so it is suitable for big projects.
Performing Performance Testing – Why, Who, How – Step By Step
The speaker was Nedko Hristov and his blog is nedko.info.
Before discovering performance test tools, he did performance testing manually, with six other colleagues connected to a local machine.
Performance testing is an ongoing process that starts at planning. It is a mistake to test only at the end of the project: a performance bottleneck found there can mean deep architectural changes and code rewriting. Unfortunately, performance is often underrated because the project has short estimates and/or no performance requirements.
You should know what metrics you need. Measure before and after changes.
Nedko showed us some screens and a recorded demo of JMeter, Grafana, and InfluxDB. The performance test results are written to the database during test execution.
The Grafana dashboard has alerts so that you don't break the whole infrastructure.
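As a rough illustration of the plumbing (not code from the talk): the JMeter Backend Listener essentially pushes points like this to InfluxDB over its 1.x line protocol, which Grafana then charts and alerts on. The database name and field names here are assumptions.

```typescript
// Push one result point to InfluxDB (1.x line protocol) during the run,
// so Grafana can chart and alert on it live.
async function writePoint(label: string, responseTimeMs: number): Promise<void> {
  // No timestamp in the line: InfluxDB assigns the server time on write.
  const line = `jmeter,label=${label} responseTime=${responseTimeMs}`;
  await fetch("http://influxdb:8086/write?db=jmeter", { method: "POST", body: line });
}

writePoint("login", 123);
```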
Automating Video And Audio In Video Management System
The speaker was Marta Magiera from Copenhagen. Automating video is not an easy task: you have to take into account different hardware processors, video codecs, video and screen resolutions, hardware acceleration options, and several other factors.
Audio automation is also challenging because of different codecs, frequencies (1000 Hz, 3000 Hz, 6000 Hz), and modes.
Performance is extremely important. The team went from no automation to automated testing.
They use Ranorex for UI tests, Jenkins CI, and NUnit. The team was heavily dependent on hardware, as you cannot run hardware acceleration on virtual machines. The challenge was to have a stable FPS driver, since not all cameras support all codecs, so they use a fake driver. They created their own test videos, because you cannot compare two different videos: performance depends on the amount of movement in the video.
The team runs all tests nightly; the execution takes 12 hours. They have 200 different tests that check frequency sample integrity.
For their audio tests they use VB-Audio Virtual Cable, Audacity, and Spek to measure the quality of the sound. NAudio is an open-source .NET library that helps them with the automation.
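No code was shown in the talk; purely as an illustration of what a frequency-integrity check could look like, here is a TypeScript sketch that uses the Goertzel algorithm to measure a PCM buffer's energy at a target frequency. The sample rate and threshold are assumptions.

```typescript
// Goertzel algorithm: energy of `samples` at `targetHz` for a given sample rate.
function goertzelPower(samples: Float32Array, targetHz: number, sampleRate: number): number {
  const coeff = 2 * Math.cos((2 * Math.PI * targetHz) / sampleRate);
  let sPrev = 0;
  let sPrev2 = 0;
  for (const x of samples) {
    const s = x + coeff * sPrev - sPrev2;
    sPrev2 = sPrev;
    sPrev = s;
  }
  return sPrev2 * sPrev2 + sPrev * sPrev - coeff * sPrev * sPrev2;
}

// A test tone at 1000 Hz should keep most of its energy at 1000 Hz after
// passing through the system; a big drop suggests the audio was corrupted.
const sampleRate = 44100;
const tone = Float32Array.from({ length: 4410 }, (_, i) =>
  Math.sin((2 * Math.PI * 1000 * i) / sampleRate)
);
console.log(goertzelPower(tone, 1000, sampleRate) > 1000); // expected: true
```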
Visual Test Automation In Adidas – Practical Usage
The speakers were Nikolay Stanoev and Georgi Yordanov. The challenge their team faced was testing 45 custom sites in 30 different languages. Visual testing is even more important to their customer than functional testing, and they release daily. German translations often break the layout because of longer words. The web sites have 50 payment providers integrated. Visual testing, in broad terms, is comparing two images.
They decreased manual visual testing by 30 percent and have had 0 critical and 0 major bugs in production since they started using Applitools.
There are several ways to do visual comparison in Applitools (a short sketch follows the list):
- Exact pixel perfect
- Strict pixel perfect (has color tolerance)
- Content
- Layout
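As an illustration only, here is a minimal TypeScript sketch of selecting a match level with the Applitools Eyes Selenium SDK (@applitools/eyes-selenium). The app name, test name, and URL are invented; treat it as a sketch, not the team's actual setup.

```typescript
// Minimal Applitools check using the Layout match level.
import { Builder } from "selenium-webdriver";
import { Eyes, Target, MatchLevel } from "@applitools/eyes-selenium";

async function visualCheck(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  const eyes = new Eyes();
  eyes.setApiKey(process.env.APPLITOOLS_API_KEY as string);
  // Layout: compare page structure, tolerate changing content.
  eyes.setMatchLevel(MatchLevel.Layout);
  try {
    await eyes.open(driver, "Shop", "Home page layout");
    await driver.get("https://example.com");
    await eyes.check("Home page", Target.window().fully());
    await eyes.close();
  } finally {
    await eyes.abortIfNotClosed();
    await driver.quit();
  }
}

visualCheck();
```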
Tips
- Use layout mode for the full page, or a combination of strict and layout; use ignored regions for changing content.
- Use user journeys for visual testing
- Reuse or convert functional tests
- Limit test execution to 10-15 minutes
Bugs found
In the first two months the tool found 53 issues. Some of the bugs were:
- Missing or new content
- UI changes that broke the layout because of library updates
- Translation issues
Benefits
- Currently the maintenance cost per test is 0-1 minute.
- With zero additional code they have fast results and increased confidence that they can release.
- Maintenance is easy and the learning curve is short; a newbie can start using the tool in 1-2 hours.
- They delegate part of the test result checks to designers and other colleagues, saving time on unnecessary communication.
They take the screenshots on real mobile devices. Applitools is a paid visual testing solution.
This topic was similar to the one Nikolay Stanoev presented at the ISTA 2017 conference.
Cypress Vs. Selenium
The speaker was Lyudmil Latinov and he made a very good comparison of the tools.
Selenium
- Selenium's architecture supports bindings for many programming languages. The driver-browser communication resembles a client-server architecture and is done via JSON.
- Selenium is a library and requires additional setup.
- Used mainly by QAs.
- Has support for all browsers.
- Relatively slow tests.
- Waiting for elements is not very stable (see the explicit-wait sketch after this list).
- Supports parallel and remote execution.
- Has great community.
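To put the list in code: a minimal selenium-webdriver sketch in TypeScript with an explicit wait; the URL and selector are placeholders.

```typescript
// Explicit wait in selenium-webdriver: poll until the element appears,
// fail after the timeout. This is the part that tends to be brittle.
import { Builder, By, until } from "selenium-webdriver";

async function loginButtonAppears(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://example.com/login");
    const button = await driver.wait(until.elementLocated(By.css("#login")), 5000);
    await button.click();
  } finally {
    await driver.quit();
  }
}

loginButtonAppears();
```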
Cypress
- Cypress runs directly in the browser and has direct access to the application.
- Developed for QAs and devs.
- Supports time travel: it takes a snapshot before every step.
- Cypress is a complete framework; it bundles Mocha, Chai, and Sinon.
- Cypress can be used for UI and integration tests.
- Has only a JavaScript binding.
- Only Chrome support.
- The team is working on Firefox support; Edge is on the roadmap.
- Fast tests, almost no delay.
- No need for waits.
- No parallel execution, no remote execution.
- Cypress can record video.
- Excellent documentation, but a less developed community.
- Cypress cannot switch tabs.
- Cypress can control the network traffic in the browser and can bypass security limitations like CORS.
Both Selenium and Cypress can load extensions and manipulate cookies. Both are open source.
Although Cypress is not as mature as Selenium, the speaker prefers writing tests with Cypress.
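For contrast, a minimal Cypress sketch; the selectors, route, and fixture are invented, and cy.intercept is the current API for the network stubbing mentioned above.

```typescript
// Cypress test: no explicit waits, and the network layer is under test control.
describe("login", () => {
  it("redirects to the dashboard", () => {
    // Stub the backend call the page makes after login.
    cy.intercept("GET", "/api/profile", { fixture: "profile.json" });

    cy.visit("/login");
    cy.get("#email").type("user@example.com");
    cy.get("#password").type("secret{enter}");

    // Cypress retries these assertions automatically until they pass or time out.
    cy.url().should("include", "/dashboard");
    cy.get(".welcome").should("contain", "Welcome");
  });
});
```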
Automation Vs. Intelligence – “Come With Me, If You Want To Live”
The speaker was Viktor Slavchev.
Why are we so special that we cannot be replaced? Testers have not been replaced by unit tests, by methodologies, by automation, or by artificial intelligence.
The speculation that humans will be replaced by AI is based on movies. We are not doing simple, executable, repeatable steps. Testing has a social nature: quality and risk are social concepts, just like money.
AI is good with facts and rules, and partially for coaching. AI is not good at creative tasks.
Team Motivation 3.0: Burying The Carrot And Stick
The speaker was Aneta Petkova. Motivation has a history:
Version 1.0 of motivation was the motivation to survive physically. Motivation 2.0 is the motivation to thrive in society: if you receive a reward, you do more of what you did; if you get punished, you stop doing it.
There are elements that defy this model, like open-source software (Selenium, Linux, JMeter). These projects are very successful and millions of people use them every day, yet their developers and maintainers receive no money.
The speaker told us about the so-called candle experiment: a reward group and a no-reward group had to use objects in a non-conventional way. The reward group needed longer to solve the task, because part of their brains was occupied with thinking about the reward.
Next was the kindergarten experiment. A fine was introduced for late pick-ups at a kindergarten, and the parents started coming late more often than before: they felt they had the right to come late, since they had paid for the late pick-up.
Money is not always the answer. It works like a drug: you have to increase the dose regularly for it to keep having an effect.
Motivation 3.0 is being driven by enjoyment or accomplishment. This is intrinsic motivation, connected with autonomy, mastery, and purpose. Being a team starts with building trust: be on time and get the work done. Communicate what you want to accomplish. People are equal but not the same, so recognize their strengths and weaknesses. Give people a choice and do not project your opinion onto others.
You manage things, you lead people.
Lightning talks
Anton Angelov mentioned the Meissarunner.com tool, which he will present as a speaker at the Selenium Conference in India this year.
Vasil Tabakov's message was to spread quality into society.
Claudiu Draghia's talk was about learning from our mistakes. As humans and as testers, we all make mistakes. We are afraid of making mistakes, but that was not the case when we were younger. Learn from a mistake and do not repeat it. Nobody is perfect; be prepared to make more mistakes.
Summary
This article summarizes the presentations from the QA Challenge Accepted 4.0 conference. If you want more details, you can check this YouTube channel: it has videos from the previous years, and I expect the organizers to publish this year's videos soon.