Tag: Conferences

Check out the most important updates from QA conferences!

QA Challenge Accepted 4.0 Highlights


QA Challenge Accepted 4.0 conference was held on 21st of April 2018 in Sofia at the International Expo Center. I witnessed overall good organization and interesting lectures. All slides were in English, while most of the presentations were delivered in Bulgarian and some in English, which could be a little confusing if your English is not very good. Here are the highlights from the presentations:

Misconceptions About Software Testing

The first speaker was Claudiu Draghia from Romania. He presented in English.

A misconception is an incorrect opinion based on faulty understanding. We are all biased by our environment and team, and we all have different perspectives. Here are some of the misconceptions, followed by the reasons why they are incorrect:

  1. Everyone can test, so testers should receive less money than devs. Can everybody drive? Everybody can try, but not everybody will be successful at it. There is a misconception that testing is easy. We should communicate better what testing is and what problems we overcome.
  2. Testing can be done under all circumstances. Not everything can be tested; it should be testable. Testability is an important feature of the software under test.
  3. Testing cannot be done without specification. Exploratory testing busts this misconception.
  4. Understanding requirements means reading them. Try to visualize, draw, and model them. Try to understand them piece by piece, one step at a time.
  5. Testers break software. This is not true. Testers break the illusions about the software; it was already broken.
  6. Testers are senior if they have five-plus years of experience. You could have been doing the same things all those years. You are senior if you can succeed in any other team.
  7. There is not enough time for testing. Reevaluate what has to be tested and reduce the scope if your testing time is reduced.
  8. Testers are bearers of bad news. You should be diplomatic. Try to use “We are not done yet with this project”.
  9. It is OK and good to test at the end of the sprint. Test one piece at a time and start early; otherwise you will feel squeezed and not very effective.
  10. Testing is what testers do. It is an activity that is performed by all team members in a variety of ways.
  11. Testing is not a structured activity. It should be structured in your head.
  12. Testing finds all bugs. This depends on goals, time, people, and resource constraints.
  13. Production bugs are failures. You will have missed bugs, and this is normal if you learn from your mistakes. You should not make the same mistake twice.

Deep Oracles: Multiplying The Value Of Automated Tests

Emanuil Slavov talked about improving high-level automation tests. Usually you use an oracle to determine whether a test has passed or failed.

There could be issues even if the test has passed. A deep oracle finds those issues. Emanuil gave us several tips on how to make our existing automation tests better, so that they can catch more issues. Tests are flakier the higher they sit in the test pyramid. Google has around 1.5% flaky tests; 1% is considered OK.

Some bug sources:

  1. Configuration issues (load balancers configuration, connection pool capacity, external URLs not in CDN)
  2. Application issues like thread-unsafe code, lack of retries in a distributed system, or database connections not closed after use.

You should use test data that is random but looks real. Use different names, countries, credit cards, etc. Different data exercises different code paths. Pay attention to special symbols in data like quotes, Twitter mentions, and URLs.
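A minimal sketch of this idea, using the @faker-js/faker npm package as an assumed stand-in for whatever data generator your project actually uses:

```typescript
// Generating random but realistic-looking test data.
import { faker } from '@faker-js/faker';

interface Customer {
  name: string;
  country: string;
  creditCard: string;
  bio: string;
}

function randomCustomer(): Customer {
  return {
    name: faker.person.fullName(),
    country: faker.location.country(),
    creditCard: faker.finance.creditCardNumber(),
    // deliberately include "special" content: quotes, a Twitter mention and a URL
    bio: `He said "hi" to @qa_team, see ${faker.internet.url()}`,
  };
}

// Every test run exercises different code paths because the data differs each time.
console.log(randomCustomer());
```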

Use service virtualization. Testing with real third-party services (PayPal, Amazon) is often not possible or is expensive.
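As an illustration, in-process HTTP stubbing with the nock library achieves a similar effect for Node-based tests; full service virtualization tools such as WireMock or Mountebank run as separate servers, but the principle is the same. The PayPal-like host and payload below are made up:

```typescript
// Stub a third-party HTTP dependency so tests are fast, free and deterministic.
import nock from 'nock';

nock('https://api.paypal.example.com')
  .post('/v1/payments')
  .reply(201, { id: 'PAY-123', state: 'approved' });

// Any HTTP call made by the code under test to that host now receives the
// canned response instead of hitting the real (expensive) service.
```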

Tests should be able to generate all the data they need. Use an attack proxy with not-so-random data. An attack proxy is an HTTP or web service proxy that sits between the test and the application and tries to find SQL injection, data disclosure, or other security issues. Emanuil showed us how numbers in a POST request, the API token, the referrer, and JSON fields can be attacked.
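A very simplified sketch of the idea (not the speaker's actual tool): replay a valid JSON request with each field replaced by attack payloads and watch for suspicious responses. The endpoint and payloads are illustrative assumptions:

```typescript
// Naive fuzzing of JSON fields with "not so random" attack payloads.
// Uses the global fetch available in Node 18+.
const payloads = [`' OR '1'='1`, '<script>alert(1)</script>', '../../etc/passwd'];

async function fuzz(url: string, validBody: Record<string, unknown>): Promise<void> {
  for (const field of Object.keys(validBody)) {
    for (const payload of payloads) {
      const body = { ...validBody, [field]: payload };
      const response = await fetch(url, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(body),
      });
      // 5xx responses (or leaked stack traces in the body) are worth a closer look
      if (response.status >= 500) {
        console.log(`Possible issue: field "${field}" with payload ${payload}`);
      }
    }
  }
}

fuzz('http://localhost:8080/api/users', { name: 'John', age: 30 });
```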

You should carefully assess whether to build your own tool or use an existing one for security testing. The tool should know a lot about your application in order to be successful. The speaker mentioned a bug where XSS could be executed in the email field because there was no character limit.

The next techniques need a dedicated test environment (servers or containers).

Usually a test relies on its assertions, but application code may keep executing after the last test step, so the logs can still reveal problems.

Track exceptions

Look for unexpected exceptions in the logs. His team had failed to catch a parse exception in their Elasticsearch logs. Bad data is similar to an exception. Bad data could be missing, unrealistic, duplicated, badly formatted, not synchronized, or conflicting information.

It depends on the context: the value zero is valid for SQL, but in PHP zero used in an if statement evaluates to false. Because of such a bug, the team saw a database arithmetic progression kill their production server three times.

According to the speaker's experience, 19% of the exceptions were caused by bad data.

The last described technique was monitoring application metrics. Record application stats after each test run. With fast tests you can tie performance bottlenecks to commits. Look for an increase in application log lines/exceptions after each commit. Another metric that can indicate an issue is the total database query count. With parallel execution of 12-16 threads you can catch deadlocks in the logs.
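A minimal sketch of such a deep-oracle step: after the test run, scan the application log and report new exception or error lines even if all functional assertions passed. The log path and pattern are assumptions:

```typescript
// Compare the number of exception/error log lines before and after a test run.
import { readFileSync } from 'fs';

function countExceptions(logFile: string): number {
  const lines = readFileSync(logFile, 'utf-8').split('\n');
  return lines.filter((line) => /Exception|ERROR/.test(line)).length;
}

const logFile = '/var/log/app/application.log'; // assumed location
const before = countExceptions(logFile);        // snapshot before the tests

// ... run the test suite here ...

const after = countExceptions(logFile);
if (after > before) {
  throw new Error(`Test run produced ${after - before} new exception/error log lines`);
}
```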

The speaker mentioned some tools that the team has used as attack proxy:

Acunetix, Netsparker, ZAP, and Burp proxy with headless execution. Burp is a Java tool that can be run from the command line, and they now use it with a slight modification.

Emanuil's blog is emanuilslavov.com.

How To Make An Agile Team’s Life Easier With Automation At Every Level – Testing Of Microservices

Stanislava Stoykova and Atanas Georgiev talked about how they inherited a large KPMG project. Multiple teams had to be able to work on the project with rapidly changing technology. They solved these challenges by using microservices. Microservices are small applications that work independently and use their own data.

They use Azure Service Fabric, ASP.NET Core, Angular, EF Core, and NServiceBus. The speakers showed us the registration microservice SPA.

In their team, all team members test. The QAs know what test scenarios are covered by unit tests. The team has API tests for frontend and backend communication.

They started testing with Postman and Newman. Then they found AspNetCore.TestHost. Its main advantage is that no real DB is needed; it uses an in-memory database. Their API tests run in seconds after each commit.
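Their stack is C#, but the same idea (hosting the application in-process so API tests need no real server or database) can be sketched in Node with supertest. The route and Jest-style test below are illustrative assumptions, not the team's actual code:

```typescript
// In-process API testing: the app is never bound to a real port or real DB.
import request from 'supertest';
import express from 'express';

const app = express();
app.get('/api/registrations/:id', (_req, res) => res.json({ id: 1, status: 'pending' }));

// Assumes a Jest-style test runner providing the global `test`.
test('returns a registration', async () => {
  await request(app)
    .get('/api/registrations/1')
    .expect(200)
    .expect((res) => {
      if (res.body.status !== 'pending') throw new Error('unexpected status');
    });
});
```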

The microservices approach is expensive; it is suitable for big projects.

Performing Performance Testing – Why, Who, How – Step By Step

The speaker was Nedko Hristov and his blog is nedko.info.

Before discovering performance test tools, he had done performance testing manually, with six other colleagues connected to a local machine.

Performance testing is an ongoing process that starts from planning. It is a mistake to test only at the end of the project: a performance bottleneck found then could mean deep architectural changes and code rewriting. Unfortunately, it is often underrated because the project has short estimates and/or no performance requirements.

You should know what metrics you need. Measure before and after changes.

Nedko showed us some screens/recorded demos of JMeter, Grafana, and InfluxDB. The performance test results are written to the database during test execution.

The Grafana dashboard has alerts so that you don’t break the whole infrastructure.

Automating Video And Audio In Video Management System

The speaker was Marta Magiera from Copenhagen. The automation of video is not an easy task. You should take into account different hardware processors, video codecs, video and screen resolutions, hardware acceleration, and several other factors.

Audio automation is also challenging because of different codecs, different frequencies (1000 Hz, 3000 Hz, 6000 Hz), and different modes.

Performance is extremely important. The team went from no automation to automation.

They use Ranorex for UI tests, Jenkins CI, and NUnit. The team was really dependent on hardware, as you cannot run hardware acceleration on virtual machines. The challenge was to have a stable FPS driver, as not all cameras support all codecs. They use a fake driver. They created test videos because you cannot compare two different videos. Performance depends on the amount of movement in the video.

The team runs all tests nightly; they take 12 hours to execute. The team has 200 different tests that check frequency sample integrity.

For their audio tests they use VB-Audio virtual cable, Audacity, and Spek to measure the quality of sound. NAudio is an open-source .NET library that helps them with the automation.

Visual Test Automation In Adidas – Practical Usage

The speakers were Nikolay Stanoev and Georgi Yordanov. The challenge the team faced was to test 45 custom sites in 30 different languages. Visual testing is even more important for their customer than functional testing. They release daily. German translations often break the layout because of longer words. The web sites have 50 payment providers integrated. Visual testing, in broad terms, is comparing two images.

They decreased manual visual testing by 30 percent and have had 0 critical and 0 major bugs in production since they started using Applitools.

There are several ways to do visual comparison in Applitools:

  • Exact pixel perfect
  • Strict pixel perfect (has color tolerance)
  • Content
  • Layout

Tips

  • Use layout matching for the full page, or a combination of strict and layout; use ignored regions for changing content (see the sketch after these tips).
  • Use user journeys for visual testing
  • Reuse or convert functional tests
  • Limit test execution to 10-15 minutes
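A minimal sketch of such a check with the Applitools JavaScript Selenium SDK: layout match level for the whole page plus an ignored region for a dynamic area. The package name, selectors, and app/test names are assumptions based on the publicly documented API, not the team's actual code:

```typescript
// Full-page visual check with Layout match level and an ignored dynamic region.
import { Builder, By } from 'selenium-webdriver';
import { Eyes, Target, MatchLevel } from '@applitools/eyes-selenium';

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  const eyes = new Eyes();
  eyes.setMatchLevel(MatchLevel.Layout); // tolerate content changes, compare structure

  try {
    await eyes.open(driver, 'Shop', 'Home page visual test');
    await driver.get('https://example.com');
    // check the full page, ignoring a dynamic carousel region
    await eyes.check('Home page', Target.window().fully().ignoreRegions(By.css('.carousel')));
    await eyes.close();
  } finally {
    await eyes.abortIfNotClosed();
    await driver.quit();
  }
})();
```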

Bugs found

In the first two months the tool found 53 issues. Some of the bugs were:

  • Missing or new content
  • UI changes that broke the layout because of library updates
  • Translation issues

Benefits

  • Currently the maintenance cost per test is 0-1 minute.
  • With zero additional code they have fast results and increased confidence that they can release.
  • The maintenance is easy, the learning curve is short, and a newbie can start to use the tool in 1-2 hours.
  • They delegate part of the test result checks to designers/other colleagues and save time from unnecessary communication.

They make screenshots on real mobile devices. Applitools is a paid visual testing solution.

This topic was similar to the one presented by Nikolay Stanoev at the ISTA 2017 conference.

Cypress Vs. Selenium

The speaker was Lyudmil Latinov and he made a very good comparison of the tools.

Selenium

  • The Selenium architecture supports bindings for many different programming languages. The driver-browser communication resembles a client-server architecture and is done via JSON (see the minimal example after this list).
  • Selenium is a library and requires additional setup.
  • Used mainly by QAs.
  • Has support for all browsers.
  • Relatively slow tests.
  • Wait for element is not very stable.
  • Supports parallel and remote execution.
  • Has great community.
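A minimal example with the Selenium WebDriver JavaScript bindings, illustrating the separate driver setup and an explicit wait; the URL and selectors are placeholders:

```typescript
// Basic Selenium flow: build a driver, wait explicitly for elements, clean up.
import { Builder, By, until } from 'selenium-webdriver';

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/login');
    // explicit wait: Selenium polls until the element is present or times out
    const userField = await driver.wait(until.elementLocated(By.id('username')), 5000);
    await userField.sendKeys('demo');
    await driver.findElement(By.css('button[type="submit"]')).click();
    await driver.wait(until.urlContains('/dashboard'), 5000);
  } finally {
    await driver.quit();
  }
})();
```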

Cypress

  • Cypress runs directly in the browser and has direct access to the application.
  • Developed for QAs and devs.
  • Supports travel back in time snapshot before every step.
  • Cypress is a complete framework that bundles Mocha, Chai, and Sinon (a minimal spec is sketched after this list).
  • Cypress can be used for UI and integration tests.
  • Has only a JavaScript binding.
  • Only Chrome support.
  • The team is working on Firefox support. Edge is in the roadmap for development.
  • Fast tests, almost no delay.
  • No need for waits.
  • No parallel execution, no remote execution.
  • Cypress can record video.
  • Excellent documentation, but the community is not yet well developed.
  • Cypress cannot switch tabs.
  • Cypress can control the network traffic in the browser. It can bypass security limitations like CORS.
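A minimal Cypress spec for illustration (placeholder URL and selectors); commands retry automatically, which is why explicit waits are rarely needed:

```typescript
// Cypress test: runs inside the browser, Mocha/Chai style.
describe('Login', () => {
  it('logs the user in', () => {
    cy.visit('https://example.com/login');
    cy.get('#username').type('demo');
    cy.get('#password').type('secret');
    cy.contains('button', 'Sign in').click();
    // built-in retry: the assertion waits until the URL changes or times out
    cy.url().should('include', '/dashboard');
  });
});
```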

Both Selenium and Cypress can load extensions and manipulate cookies. Both are open source.

Although Cypress is not as mature as Selenium, the speaker prefers writing tests with Cypress.

Automation Vs. Intelligence – “Come With Me, If You Want To Live”

The speaker was Viktor Slavchev.

Why are we so special that we cannot be replaced? Testers could not be replaced by unit tests, by methodologies, by automation or by artificial intelligence.

The speculation that humans will be replaced by AI is based on movies. We are not performing simple, executable, repeatable steps. Testing has a social nature. Quality and risk are social concepts, just like money.

AI is good with facts and rules, and partially also for coaching. AI is not good at creative tasks.

Team Motivation 3.0: Burying The Carrot And Stick

The speaker was Aneta Petkova. Motivation has its history.

Version 1.0 of motivation was the motivation to survive physically. Motivation 2.0 is the motivation to thrive in society: if you receive a reward, you do more of what you did; if you get punished, you stop doing it.

There are things that defy this model, like open-source software (Selenium, Linux, JMeter). These projects are very successful and millions of people use them every day, yet their developers and maintainers receive no money.

The speaker told us about the so-called candle experiment. There was a reward group and a no-reward group, and the task was to use objects in a non-conventional way. The reward group needed more time to solve the task, because part of their brains was occupied with thinking about the reward.

The next was the kindergarten experiment. A fine was introduced for late pick-ups at a kindergarten. The parents started to come late more often than before the fine was introduced: they felt they had the right to come late, as they had paid for the late pick-up.

Money is not always the answer. It works like a drug: you have to increase the dose regularly for it to keep having an effect.

Motivation 3.0 is driven by enjoyment and accomplishment. This is intrinsic motivation, connected with autonomy, mastery, and purpose. Being a team starts with building trust. Be on time and get the work done. Communicate what you want to accomplish. People are equal but not the same: you need to recognize strengths and weaknesses. Give choice and do not project your opinion onto others.

You manage things, you lead people.

Lightning talks

Anton Angelov mentioned the Meissarunner.com tool. He will present it as a speaker at the Selenium conference in India this year.

Vasil Tabakov's message was to spread quality into society.

Claudiu Draghia's talk was about how we should learn from our mistakes. We all make mistakes, as humans and as testers. We are afraid to make mistakes, but this was not the case when we were younger. Learn from a mistake and do not repeat it. Nobody is perfect. Be prepared to make other mistakes.

Summary

This article includes summaries of the QA Challenge Accepted 4.0 conference presentations. If you want to learn more details, you can check this YouTube channel. There are videos from the previous years, so my expectation is that the organizers will soon publish the videos from this year.

ISTA 2017 Highlights Day Two



I was pleased to attend the ISTA 2017 conference in Sofia, the capital of Bulgaria. My previous article was about the presentations from the first ISTA day; this article covers the lectures that I attended on the second conference day.

Are You Managing?

Steve Odart's presentation was about management and the balance that should be achieved in every project. 94% of all bugs belong to management, because it failed to prevent the problems. Most managers do not receive formal training; they must learn on the job, so most just follow the patterns they have seen before. There are three main questions that every manager should be able to answer:

  1. Do you know what you want?
  2. Do you know if you are getting it? You should measure the progress, so that you know whether you achieve your goals.
  3. If not, what are you doing about it? If you are not on the right track, you should change something.

You should find the balance in the triangle of time, resources, and project scope.

There should also be balance between coding features, technical debt and continuous improvement.

We are in a bad situation when the debt has an increasing trend. The debt is created by:

  • Deferring defects
  • Poor architecture/implementation
  • Poor quality control, lack of automation

Your goal should be to get a little bit better every day.

When you try to motivate your manager to change something you should show the effect of this change in terms of money.

The value trap is to think that the more features we deliver, the more money we receive. A feature is not only code; it is also documentation, support, and automation.

The future of management is that we will all become managers. There will be self-directed teams; they will have a coach, not a manager.

The Ultimate Feedback Loop: How We Decreased Customer Reported Defects by 80%

Emanuil Slavov is an experienced speaker. He shared his experience with the root causes behind the incidents reported by customers over 2.5 years. Emanuil was able to collect the statistics because they link the defect fix with the customer's ticket in Jira. He asked a question: “If a defect goes unnoticed by customers, is it a defect?” There could be different points of view, but if a defect is reported by the customer, we should investigate it. The team does not have many algorithms in their code, but has 60-70 services. The most expensive defects are the ones that prevent us from working on features. Third-party services (Facebook, Instagram) caused around 10% of the defects. 44% could be observed in the front-end, 56% in the back-end. 38% of all defects were regression bugs.

13% of the defects could have been prevented by appropriate unit tests. 72% were in methods with a cyclomatic complexity of 3 or above. A cyclomatic complexity of two means there is a single conditional statement (if-else). 82% of the bugs were introduced in methods with more than 10 lines of code.
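For illustration, a tiny function with cyclomatic complexity 3 (one plus two decision points), the kind of method these statistics point at; the logic itself is made up:

```typescript
// Cyclomatic complexity counts independent paths: 1 + number of decision points.
function shippingCost(countryCode: string, weightKg: number): number {
  let cost = 5;               // base cost, no decision yet (complexity starts at 1)
  if (countryCode !== 'BG') { // decision point 1
    cost += 10;
  }
  if (weightKg > 20) {        // decision point 2
    cost += 25;
  }
  return cost;                // cyclomatic complexity = 3
}
```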

This shows that 100% coverage is not needed; getter and setter tests would not have been effective in Emanuil’s team.

36% of the bugs could have been prevented by API tests. 21% could have been caught at the UI level.

The remaining 30%?

13 + 36 + 21 does not add up to 100, because the remaining 30% could never be detected in their test environment. These 30% could only happen in the real world. They were caused by edge cases, configuration issues, unintended usage of the software, and incomplete requirements.

His advice was to forget these 30%. Focus on early detection and fast recovery.

Customers found 31% of the defects less than one month after they had been introduced, and 51% in less than two months. Developers fixed 48% of the bugs within one working day and 69% within one working week.

Static code analysis (SonarQube, ESLint) could have caught 2.3% of the bugs. Emanuil stated that it was not worth the investment. 6% could have been prevented if Java had been used instead of PHP.

Emanuil himself introduced 5% of the bugs, because he made incorrect assumptions about the business logic.

Recommendations

Developers should fix their own bugs; this is how they learn. You should have at least one test written per method. He advised us to do manual sanity checks even for the smallest fixes. He recommended mandatory code reviews and testing with boundary values. The speaker also told us that monitoring and fixing exceptions found in production logs is essential. He monitors the exception logs after each execution of the automated tests, and for him the run is not successful if any exception is present. His team has implemented custom data quality checks in the database.

His advice on how to start decreasing the issues reported by the customers was:

  • Allocate time;
  • Figure out what you want to track;
  • Track customer reported defects;
  • Include defect id in the commit message;
  • Investigate immediately;

Writing reliable web application UI tests

Maxim Naidenov started with one typical problem: feedback for developers comes slowly, because the execution of the automation tests takes time. Local reproduction can be hard and retesting can be slow. Asking developers to write UI tests is not a common practice. This is because UI tests:

  • run in a browser;
  • are flaky;
  • the web application and the UI tests could be written in different languages;
  • require external tools and complicated local setup;
  • are hard to debug.

Maxim introduced us to SAPUI5. The open-source version is called OpenUI5. The difference between SAPUI5 and OpenUI5 is several SAP-specific widgets and the license. OpenUI5 applications are responsive across browsers and devices. OpenUI5 provides a UI testing framework named OPA (One Page Acceptance). The scripts are used for testing UI5-based applications. OPA scripts are browser-based; they are not executed with Selenium.

An OPA test is started from an HTML page. The script starts and stops the mocked application. The test is a sequence of waitFor calls.
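A minimal OPA5 sketch (OPA tests are plain JavaScript running in the browser); the control type, file name, and messages are illustrative assumptions:

```typescript
// OPA5 test: start the mocked app, wait for a control, then tear down.
declare const sap: any; // provided by the UI5 runtime in the browser

sap.ui.require(['sap/ui/test/opaQunit', 'sap/ui/test/Opa5'], function (opaTest, Opa5) {
  opaTest('Should show the submit button', function (Given, When, Then) {
    Given.iStartMyAppInAFrame('test/mockServer.html'); // start the mocked application

    When.waitFor({
      controlType: 'sap.m.Button',
      success: function (buttons) {
        Opa5.assert.ok(buttons.length > 0, 'Found at least one button');
      },
      errorMessage: 'No button appeared',
    });

    Then.iTeardownMyAppFrame(); // stop the mocked application
  });
});
```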

My opinion is that OPA has severe limitations and could be used only for OpenUI5 applications. You can read more about the limitations here.

Groom or doom – why junior talent is the true fuel for your growth

Svetozar Georgiev shared his several-year experience with the IT job market and Telerik Academy students.

HR agencies are like a brokerage service. They do not find new talents, they distribute the existing ones between the IT companies. The demand is high and the dangers are that compensation will be high and there will be job hopping tendencies.

The IT-system-friendly way to hire is based on training and on-boarding new people. You can shape them in a way you like. If you motivate and properly mentor juniors, they will advance very quickly. They know that they do not know much, and are eager to learn. You can protect and evolve your own company culture. Career switchers are also very important. Typically they have higher motivation for success, lower ego, reasonable expectations and life maturity. Short-term effort for training and mentoring is worth it, in order to achieve the long-term growth.

You should encourage juniors' curiosity and allow for “controlled failures”. Juniors learn best if they work on real projects, but you should protect the project and the juniors themselves from their mistakes. Customers should be informed that they are trainees. Ensure juniors get enough attention. Stimulate them to learn outside of office hours. Give the juniors the bigger picture: they should know why they do something.

Once you have many seniors, you can move some of them to other teams. Rotate people and instill acceptance of change.

Training juniors is a good investment, not an expenditure. Develop new talents, do not steal them from other companies. This is the sustainable, cost-efficient, controlled, and responsible way to grow.

Running Kubernetes at scale across multiple cloud platforms

Iliyan Nenov spoke about Kubernetes. Shipping containers were invented in 1956, Docker in 2013. Kubernetes is highly efficient, lightweight, cloud-ready, easy to assemble, and has standardized delivery. Kubernetes supports both Rocket (rkt) and Docker for portability. Containers are deployed 900 times faster than virtual machines and utilize the CPU 6 times better. All containers on a host machine share the same IP; container management platforms resolve that challenge, and a TCP proxy per container provides an IP per container.

There are more than 60 certified Kubernetes solutions. You can run Kubernetes yourself, use certified Kubernetes distributions or use managed Kubernetes services like Azure public cloud.

Amazon, Google, Microsoft and Dell invest around 10 billion dollars per year in infrastructure and releasing features. They provide Kubernetes services. Cloud lock-in is when there is a huge cost to switch to another cloud service provider.

If you have competitive data and algorithms, they should be left in house.

Kubernetes has the strongest chance to become a standard for cloud portability.

Production monitoring and analysis in the cloud

Nasko Georgiev and Mladen Savov presented their views on active and passive monitoring of applications.

Good performance attracts new customers. A longer load time means a decrease in revenue. The ideal situation is when you can run your tests from multiple locations around the world.

Passive monitoring requires actual traffic; you can monitor it within the corporate network, with a client-side beacon, and with local instrumentation. Passive monitoring depends on real users, has a performance impact on the business, and cannot measure in pre-production.

Active monitoring simulates user behavior. You can do it with Selenium-based scripts in the cloud. There is no need for actual traffic. The Selenium scripts cover business needs and scenarios. There is no additional load on production systems. Active monitoring covers a limited number of paths and could be useful as part of the CI, for early detection of issues.

Nasko and Mladen use Neustar; they set a test time-out and measure test execution time between the test.BeginTransaction and test.endTransaction calls. They use a BigPanda dashboard for smart alerting: if they receive 1000 alerts for one problem, only one message is sent.

The speakers use two methods for catching performance issues. The first is the 3-strike rule: wait for 3 consecutive errors in functional tests, or 3 response times above the set threshold. The second option is to count errors during a period of time.

The conclusion was that it is better to combine both active and passive monitoring. You can measure SLAs. It is best to compare performance across different locations and different browsers.

Making Sense of Big Data through machine learning and statistical modelling

Dimiter Shalvardjiev made a presentation about machine learning techniques. Probabilities are more efficient than strict rules over large data sets. We can have labelled, unlabelled, and mixed data. For example, labelled data is knowing which samples contain cancer cells and which do not. Experienced doctors were able to predict in 48% of the cases whether the patient had cancer or not; with machine learning the percentage was 76.

Customer segmentation is an example of an unlabelled data set; there is no binary differentiation. Fraud detection is mixed. One real-life problem that benefits from machine learning is sales maximization. Up-selling suggestions are based on historic behavior. Spam is no longer working; only 5% of it is effective.

Do not use machine learning for pricing, as it is perceived as unfair. Also, do not use machine learning techniques for customer re-targeting (selling a different product than the one initially planned).

Summary

From my point of view, the ISTA conference is mostly suitable for junior to intermediate level QA engineers. If you expect the presentations to be more technical, you will be disappointed. I did not see preliminary voting for the lectures or speakers. It would also be nice if the videos and the slides were published right after the event; currently, more than a month after the end of ISTA conference 2017, they are still not available. Here is the link to the ISTA conference YouTube videos. I hope you find this article useful!

ISTA 2017 Highlights Day One



I was pleased to attend ISTA conference 2017 in Sofia, the capital of Bulgaria. The conference goal is to present innovations in software technologies and automation. In this article I will write my personal opinion about the lectures.

There were 3 tracks with lectures running simultaneously, so I chose the ones most interesting from my point of view. The 3 halls are named confusingly: Alpha, Beta, and Panorama, where Beta is the largest, Alpha the middle one, and Panorama the smallest. The organization of the event was excellent. I hope the videos and slides will be uploaded soon, so I can catch up with the lectures that I found interesting but was not able to attend because I was in another track.

Day One

Innovate, automate, accelerate

Birger Thorburn told us about his recent project. He had moved from a project with a release cycle of 5 years to a project that had to deliver a new product after 12 months. On top of that, the clients were among the top 25 largest banks. The team was located on 3 continents.

The team managed the tight deadlines by means of automation. They set up continuous deployment. Every night they used hundreds of virtual machines (VMs) to run the tests. The VMs had a clean state; they were brought up before the start of the tests and shut down after the test execution.

It is very important to track the progress in such a project, so that you know when you have to panic. Release 1 of the product is just the beginning; his team's goal is to have a release every month. It is not important which Agile methodology you use; use the one that suits you best. His preference was towards Kanban. You should automate everything.

One of his main points was that smart people will transform the future through technologies. Working with software should not make us forget that there is a physical world that needs our attention too.

Unfortunately, he did not give us many details about the project and the tools they used to achieve the automation. He mentioned Kubernetes, Grafana/Splunk, Terraform, and Kafka in one of his slides. The speaker was a little bit quick when deciding that there were no questions; I personally heard the disappointment of the guy next to me.

Security, Big Data and other challenges to the IoT

Martin Harizanov stated that the IoT (Internet of Things) has been around for decades; however, it was previously known as “connected computers”. The IoT is not only physical objects connected to the Internet.

Security

One of the security challenges is to ensure that only authorized devices are connected to your cloud service. He mentioned HMAC verification (a sketch follows below). Users should see only their own devices. Security patches should be properly installed and devices should be updated regularly; unfortunately, not all devices can be updated at the same time. Martin mentioned FOTA (firmware over-the-air updates). Hackers tried to find weaknesses in each of his projects within 2 weeks of its start. The hackers' goals are to turn devices into zombies, to get unauthorized access to personal info, and/or denial of service.
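A minimal sketch of HMAC-based verification with Node's built-in crypto module: the device signs its payload with a shared secret and the cloud service recomputes and compares the signature. The secret and payload are made up for illustration:

```typescript
// HMAC request verification: only devices knowing the shared secret can produce
// a signature the server will accept.
import { createHmac, timingSafeEqual } from 'crypto';

const deviceSecret = 'per-device-shared-secret';

function sign(payload: string): string {
  return createHmac('sha256', deviceSecret).update(payload).digest('hex');
}

function verify(payload: string, signature: string): boolean {
  const expected = Buffer.from(sign(payload), 'hex');
  const received = Buffer.from(signature, 'hex');
  // constant-time comparison to avoid timing attacks
  return expected.length === received.length && timingSafeEqual(expected, received);
}

const payload = JSON.stringify({ deviceId: 'sensor-42', temperature: 21.5 });
const signature = sign(payload);         // computed on the device
console.log(verify(payload, signature)); // checked on the server -> true
```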

Martin gave us an example of an API weakness in a high-end Chinese camera. A hacker was able to easily collect data about 300 of these cameras: user IDs were sequential, so it was easy to get one and then just increase the value by one, and the unencrypted communication channel (HTTP) exposed the users' credentials in plain text.

The effects of security breaches in the IoT world are very similar to those in other IT sectors:

  • Loss of confidence in the company;
  • Financial losses;
  • Personal data loss, which is not harmless;
  • Legal implications.

The speaker attributes the causes of the security issues mainly to tight time-frames and poor architecture design.

There should have been protection in place so that it is not possible to get data from 300 cameras from a single IP address. You should scan the logs from time to time in order to find attack attempts.

Even banks get hacked, but you should reduce and mitigate the risk.

Big Data

Some of the big data challenges are the constantly increasing volume of data, data quality, data diversity, data relations, and security.

You should put effort into data encryption, data anonymization, and rolling code signatures.

Overall nice lecture, the speaker was enthusiastic and knowledgeable.

Overcoming the diversity of smart devices

Alexander Kostadinov and Dimitar Ivanov gave us an insight into the developer perspective on IoT diversity.

2017 came with a focus on the monetization of IoT systems. Many small providers disappeared. The speakers gave us some nice examples of IoT devices like smart kitchens, smart door and bike locks, irrigation controllers, home energy monitors, etc.

The most widespread ways of device communication are REST, CGI, and BLE.

Alexander and Dimitar mentioned problems that were not caught on simulators. It is extremely important to have the actual device well in advance.

Some of the issues that they faced were:

  • Broken devices that cannot be reset due to a key lost during an update;
  • Overloaded network;
  • Interference when too many devices are communicating on one frequency protocol;
  • Bugs in device firmware that get fixed slowly;
  • There are too many protocols and devices;
  • Every vendor has its own API;
  • Even if the vendor is the same, there could be different APIs for the different devices;
  • The automation is hard.

The “solution” is:

  • A higher abstraction level; there are open-source solutions;
  • Distributed teams;
  • Make the integration of new devices easy;
  • Contribute to standardization discussion work-groups.

The future of computing

Laurent Bugnion made an interesting presentation. Innovation is in cloud computing. Smart devices are more and more affordable. A person will be judged not by what they know, but by how fast they can find the answer.

Blockchain is not about cryptocurrencies. It is about storing information in a decentralized fashion: the data cannot be changed and everybody has the same copy.

The speaker presented several Cortana demos. Using his voice and Cortana’s help, he set a reminder to call his father when he returns home, which is interesting because Cortana has to use his geolocation to determine when to display the reminder.

Laurent showed us several demos that used Azure services. Serverless means that you don’t have to worry about the server, not that there is no server. Somebody else takes care of the servers, you just use the services.

There was a demo of generating thumbnails from pictures. The Emotion API takes a picture as input and returns the probability that the picture shows a certain emotion, like happiness for example.

There was a demo with a floating astronaut added above the audience.

Laurent showed us a holographic building; he extracted some holo-pipes and their schematics. From my point of view, that could be of great help to future engineers.

He showed us holographic chat with HoloBeam.

It was a really interesting presentation.

Testing without borders

Tania Vladimirova spoke about testing without borders. This is testing with a free, proven, open-source, easy, simple, portable, and scalable solution. She mentioned JFrog Artifactory; I found only a paid version with a free trial for it. Her team uses JFrog for packaging the test environment (Ruby gems, ready-to-use Docker images). They execute functional tests in parallel.

You should start the automation from SCM (source control management) system.

They use Cucumber with Ruby and Docker. Jenkins is used for continuous delivery. Tania mentioned OpenShift, Kubernetes, Ansible and Chef, Selenium, and SoapUI. You could use Zabbix, an open-source tool, for monitoring.

I was disappointed by this lecture. I was expecting a demo, but there were only slides.

Make it visible

Nikolay Stanoev shared his experience from several years of visual testing.

They had functional test coverage of around 80%, but also visual issues in production like broken layouts due to content changes and regression bugs.

Selenium + Image Magick

His team wanted cross-browser visual tests that are easy to write and maintain and that integrate with their existing framework. They started with Selenium and ImageMagick. Their solution used free tools but was an epic fail: they had too many false positives as a result of content changes. They have too much dynamic data (text and data changes), shiny animated elements, GIFs, and image carousels, and this content was not under their control.

Their second approach was to mark the problem areas and exclude them from the comparison (a simplified sketch of this idea follows below). The main issues with this second attempt were the increased maintenance cost and the increased code complexity (too many ifs).
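For illustration, the same idea with pixelmatch and pngjs instead of ImageMagick (named stand-ins; the principle is identical): paint the dynamic regions with a constant color in both screenshots before comparing pixels. Paths and region coordinates are made up:

```typescript
// Pixel comparison with "ignored" regions masked out before diffing.
import { readFileSync } from 'fs';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';

type Region = { x: number; y: number; width: number; height: number };

function maskRegion(img: PNG, r: Region): void {
  for (let y = r.y; y < r.y + r.height; y++) {
    for (let x = r.x; x < r.x + r.width; x++) {
      const idx = (img.width * y + x) << 2;
      img.data[idx] = img.data[idx + 1] = img.data[idx + 2] = 0; // black out RGB
    }
  }
}

const baseline = PNG.sync.read(readFileSync('baseline.png'));
const actual = PNG.sync.read(readFileSync('actual.png'));
const carousel: Region = { x: 0, y: 300, width: 1200, height: 400 }; // dynamic area

maskRegion(baseline, carousel);
maskRegion(actual, carousel);

const diff = new PNG({ width: baseline.width, height: baseline.height });
const mismatched = pixelmatch(
  baseline.data, actual.data, diff.data,
  baseline.width, baseline.height, { threshold: 0.1 }
);
console.log(`Mismatched pixels outside ignored regions: ${mismatched}`);
```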

Applitools

They switched to a paid solution by Applitools.

For almost a year they sent false positives to Applitools and they provided fixes. Nikolay likes the following features:

  • Many programming languages you can choose from;
  • Different match options, for dynamic data they use layout comparison;
  • Easily ignored regions;
  • You can compare single element on the page;
  • The comparison is full page, not only the visible viewport;
  • You can compare floating elements, that you know are on your page, but you don’t know exactly where.

Nikolay showed us a short demo. Currently his team spends 0-5 minutes per day on maintenance of their visual tests. They have reduced testing time by 30%. The team uses Applitools for bug-fix testing. They have a baseline (golden) image for each browser. Nikolay found that the tool is not useful during redesigns; in such big projects he recommended not running visual tests at all.

I recommend seeing this presentation if you need to execute visual tests against a web application.

Automating Web Security Testing

Yavor Papazov gave us an excellent presentation on web security testing. Rapid development often affects operational stability in terms of security. As a general rule, security mistakes do not have fast feedback: a developer can introduce a security bug that goes unnoticed for years. So there are two options to manage security:

  1. Resilience instead of security. Give up security at release and work to improve it afterwards. Chaos Monkey was mentioned as a tool to test resilience.
  2. Ensure security is embedded early along the software production pipeline. That leads to automating the security testing.

Security test cases have negative requirements, which are harder to test than positive ones. For example, an evil hacker should not be able to log in.

“Make it secure” is not well defined. There are projects that try to define common security weaknesses.

Mentioned tools:

There was a demo with the Strict-Transport-Security header. It tells the browser to connect only over HTTPS, which is better than a server redirect.
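The header itself is standard; a minimal sketch of sending it from an Express application (an illustrative assumption, not the demo shown at the conference):

```typescript
// Send the HSTS header on every response.
import express from 'express';

const app = express();

app.use((_req, res, next) => {
  // tell the browser to use HTTPS only, for one year, including subdomains
  res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  next();
});

app.get('/', (_req, res) => res.send('hello over HTTPS'));

// assumes TLS termination happens in front of the app (e.g. a reverse proxy)
app.listen(3000);
```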

Mozilla SSLyze was mentioned as a tool for TLS testing.

Testing for XSS vulnerabilities can be automated, but not fully, as the number of possible test cases is huge.

Unfortunately, there are no ready-to-use recipes that work for everyone. Yavor’s advice is to start small, as advocated by the Agile methodology. We can translate some security vulnerabilities into functional tests. Have metrics in place as a starting point. Automated security testing will be a standard in the future.

My next article covers the second conference day of ISTA 2017. Don’t miss it if you found the first-day article useful.