ISTA 2017 Highlights Day Two

I was pleased to attend the ISTA 2017 conference in Sofia, the capital of Bulgaria. My previous article covered the presentations from the first day; this one covers the lectures I attended on the second day.

Are You Managing?

Steve Odart's presentation was about management and the balance that should be achieved in every project. He claimed that 94% of all bugs can be traced back to management, because it failed to prevent the problems. Most managers do not receive formal training; they must learn on the job, so most simply follow the patterns they have seen before. There are 3 main questions that every manager should be able to answer.

  1. Do you know what you want?
  2. Do you know if you are getting it? You should measure progress, so that you know whether you are achieving your goals.
  3. If not, what are you doing about it? If you are not on the right track, you should change something.

You should find the balance between the triangle of time, resources and project scope.

There should also be balance between coding features, technical debt and continuous improvement.

We are in a bad situation when technical debt keeps increasing. The debt is created by:

  • Deferring defects
  • Poor architecture/implementation
  • Poor quality control, lack of automation

Your goal should be to get a little bit better every day.

When you try to motivate your manager to change something, you should show the effect of the change in terms of money.

The value trap is to think that the more features we deliver, the more money we receive. A feature is not only code; it is also documentation, support and automation.

The future of management is that we will all become managers. There will be self-directed teams; they will have a coach, not a manager.

The Ultimate Feedback Loop: How We Decreased Customer Reported Defects by 80%

Emanuil Slavov is an experienced speaker. He shared his experience with finding the root causes behind the incidents reported by customers over 2.5 years. Emanuil was able to collect the statistics because the team links each defect fix with the customer's ticket in Jira. He asked a question: “If a defect goes unnoticed by customers, is it a defect?” There can be different points of view, but if a defect is reported by a customer, we should investigate it. The team does not have many algorithms in their code, but has 60-70 services. The most expensive defects are the ones that prevent us from working on features. Third-party services (Facebook, Instagram) caused around 10% of the defects. 44% could be observed in the front end, 56% in the back end. 38% of all defects were regression bugs.

13% of the defects could have been prevented by appropriate unit tests. 72% were in methods with a cyclomatic complexity of 3 and above. A cyclomatic complexity of two means there is a single conditional statement (if-else). 82% of the bugs were introduced in methods with more than 10 lines of code.

This shows that 100% coverage is not needed; getter and setter tests would not have been effective in Emanuil's team.
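Cyclomatic complexity counts the independent paths through a method: it starts at 1 and grows by one for each decision point. The following Python sketch is my own illustration (not code from the talk) of why methods with complexity 3 and above are exactly the ones with branching logic worth testing:

```python
def shipping_cost(weight_kg, express):
    """Cyclomatic complexity 3: one base path plus two decision points."""
    cost = 5.0                 # base path (complexity starts at 1)
    if weight_kg > 10:         # decision point 1 -> complexity 2
        cost += 2.5
    if express:                # decision point 2 -> complexity 3
        cost *= 2
    return cost


def greeting(hour):
    """A single if/else keeps complexity at 2 -- below the threshold
    where 72% of the reported bugs lived."""
    if hour < 12:              # decision point 1 -> complexity 2
        return "Good morning"
    return "Good afternoon"
```

Covering all paths of `shipping_cost` already needs four test cases, which is why low-complexity getters and setters add little value in comparison.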

36% of the bugs could have been prevented by API tests. 21% could have been caught at the UI level.

What about the remaining 30%?

13 + 36 + 21 does not add up to 100 because the remaining 30% could never be detected in their test environment. Those defects could only happen in the real world. They were caused by edge cases, configuration issues, unintended usage of the software and incomplete requirements.

His advice was to forget about this 30% and to focus on early detection and fast recovery.

Customers found 31% of the defects within one month after they were introduced, and 51% within two months. Developers fixed 48% of the bugs within one working day and 69% within one working week.

Static code analysis (SonarQube, ESLint) could have caught 2.3% of the bugs. Emanuil stated that it was not worth the investment. 6% could have been prevented if Java had been used instead of PHP.

Emanuil himself introduced 5% of the bugs, because he made incorrect assumptions about the business logic.


Developers should fix their own bugs; this is how they learn. You should have at least one test written per method. He advised us to do manual sanity checks even for the smallest fixes. He recommended mandatory code reviews and testing with boundary values. The speaker also told us that monitoring and fixing exceptions found in production logs is essential. He monitors the exception logs after each execution of the automated tests, and for him the run is not successful if any exception is present. His team has implemented custom checks in the database for data quality.
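To illustrate the boundary-value advice, here is a minimal Python sketch of my own (the business rule and function name are hypothetical, not from the talk). The idea is to test just below, at, and just above each limit, because off-by-one mistakes cluster at boundaries:

```python
def apply_discount(price, quantity):
    """Hypothetical business rule: 10% off for orders of 10 items or more."""
    if quantity >= 10:
        return round(price * quantity * 0.9, 2)
    return round(price * quantity, 2)


# Boundary-value checks around the limit of 10 items:
assert apply_discount(2.0, 9) == 18.0    # just below the boundary: no discount
assert apply_discount(2.0, 10) == 18.0   # at the boundary: discount kicks in
assert apply_discount(2.0, 11) == 19.8   # just above the boundary
```

A developer who accidentally wrote `quantity > 10` would pass the first and third checks but fail the one at the boundary, which is exactly the kind of defect this technique catches.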

His advice on how to start decreasing the issues reported by the customers was:

  • Allocate time;
  • Figure out what you want to track;
  • Track customer reported defects;
  • Include defect id in the commit message;
  • Investigate immediately.

Writing reliable web application UI tests

Maxim Naidenov started with one typical problem: feedback for developers comes slowly, because the execution of the automated tests takes time. Local reproduction can be hard, and retesting can be slow. Asking developers to write UI tests is not a common practice, because UI tests:

  • run in a browser;
  • are flaky;
  • may be written in a different language than the web application;
  • require external tools and a complicated local setup;
  • are hard to debug.

Maxim introduced us to SAPUI5. The open-source version is called OpenUI5; the differences between SAPUI5 and OpenUI5 are several SAP-specific widgets and the license. OpenUI5 applications are responsive across browsers and devices. OpenUI5 provides a UI testing framework named OPA (One Page Acceptance). The scripts are used for testing UI5-based applications. OPA scripts are browser-based; they are not executed with Selenium.

An OPA test is started from an HTML page. The script starts and stops the mocked application. The test is a sequence of waitFor calls.

My opinion is that OPA has severe limitations and could be used only for OpenUI5 applications. You can read more about the limitations here.

Groom or doom – why junior talent is the true fuel for your growth

Svetozar Georgiev shared his several years of experience with the IT job market and the Telerik Academy students.

HR agencies are like a brokerage service: they do not find new talent, they distribute the existing talent between the IT companies. Demand is high, and the dangers are that compensation will inflate and job-hopping tendencies will grow.

The IT-ecosystem-friendly way to hire is based on training and on-boarding new people. You can shape them in a way you like. If you motivate and properly mentor juniors, they will advance very quickly. They know that they do not know much, and they are eager to learn. You can also protect and evolve your own company culture. Career switchers are very important as well: they typically have higher motivation for success, a lower ego, reasonable expectations and life maturity. The short-term effort of training and mentoring is worth it in order to achieve long-term growth.

You should encourage junior curiosity and allow for “controlled failures”. Juniors learn best when they work on real projects, but you should protect both the project and the juniors themselves from their mistakes. Customers should be informed that they are trainees. Ensure juniors get enough attention. Stimulate them to learn outside of office hours. Give the juniors the bigger picture: they should know why they do something.

Once you have many seniors, you can move some of them to other teams. Rotate people and instill acceptance of change.

Training juniors is a good investment, not an expenditure. Develop new talent; do not steal it from other companies. This is the sustainable, cost-efficient, controlled and responsible way to grow.

Running Kubernetes at scale across multiple cloud platforms

Iliyan Nenov spoke about Kubernetes. Shipping containers were invented in 1956, Docker in 2013. Kubernetes is highly efficient, lightweight, cloud-ready, easy to assemble and has standardized delivery. Kubernetes supports both rkt (Rocket) and Docker for portability. Containers are deployed 900 times faster than virtual machines and utilize the CPU 6 times better. All containers on a host machine share the same IP; container management platforms resolve that challenge by providing a TCP proxy per container, which gives each container its own IP.

There are more than 60 certified Kubernetes solutions. You can run Kubernetes yourself, use a certified Kubernetes distribution or use a managed Kubernetes service such as the Azure public cloud.

Amazon, Google, Microsoft and Dell invest around 10 billion dollars per year in infrastructure and new features, and they provide Kubernetes services. Cloud lock-in occurs when there is a huge cost to switch to another cloud service provider.

If you have competitive data and algorithms, they should be kept in house.

Kubernetes has the strongest chance to become a standard for cloud portability.

Production monitoring and analysis in the cloud

Nasko Georgiev and Mladen Savov presented their views on active and passive monitoring of applications.

Good performance attracts new customers, while longer load times mean decreased revenue. The ideal situation is when you can run your tests from multiple locations around the world.

Passive monitoring requires actual traffic; you can monitor it within the corporate network, with a client-side beacon and with local instrumentation. Passive monitoring depends on real users, has a performance impact on the business and cannot measure in pre-production.

Active monitoring simulates user behavior. You can do it with Selenium-based scripts in the cloud, so there is no need for actual traffic. The Selenium scripts cover the business-critical scenarios, and there is no additional load on the production systems. Active monitoring covers a limited number of paths, but it can be useful as part of CI for early detection of issues.

Nasko and Mladen use Neustar: they set a test time-out and measure the test execution time between the test.BeginTransaction and test.endTransaction calls. They use a BigPanda dashboard for smart alerting; if they receive 1000 alerts for one problem, only one message is sent.

The speakers use two methods for catching performance issues. The first is the 3-strike rule: wait for 3 consecutive errors in the functional tests, or 3 response times above the set threshold. The second is to count the errors during a period of time.
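The 3-strike rule can be sketched in a few lines of Python (my own illustration; the speakers did not show their implementation, and the threshold value here is an assumption):

```python
def should_alert(results, threshold_ms=2000, strikes=3):
    """3-strike rule: alert only after `strikes` consecutive bad samples.

    `results` is a list of response times in milliseconds; None marks
    a functional failure. A single slow sample or transient error does
    not page anyone -- only a sustained problem does.
    """
    consecutive = 0
    for sample in results:
        bad = sample is None or sample > threshold_ms
        consecutive = consecutive + 1 if bad else 0
        if consecutive >= strikes:
            return True
    return False


# One transient spike is ignored...
assert not should_alert([500, 2500, 600, 700])
# ...but three consecutive breaches trigger an alert.
assert should_alert([500, 2500, 2600, None])
```

The counter resets on every good sample, which is what filters out the noise that a naive "alert on first error" policy would generate.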

The conclusion was that it is better to combine active and passive monitoring. You can measure SLAs, and it is best to compare performance across different locations and different browsers.

Making Sense of Big Data through machine learning and statistical modelling

Dimiter Shalvardjiev made a presentation about machine learning techniques. Probabilities are more efficient than strict rules over large data sets. Data can be labelled, unlabelled or mixed. Labelled data means, for example, knowing which samples contain cancer cells and which do not. Experienced doctors were able to predict in 48% of the cases whether a patient had cancer or not; with machine learning, the rate was 76%.

Customer segmentation is an example of an unlabelled data set: there is no binary differentiation. Fraud detection is mixed. One real-life problem that benefits from machine learning is sales maximization, where up-selling suggestions are based on historic behavior. Spam no longer works; only 5% of it is effective.

Do not use machine learning for pricing, as it is perceived as unfair. Also, do not use machine learning for customer re-targeting (selling a different product than the one initially planned).


From my point of view, the ISTA conference is mostly suitable for junior- to intermediate-level QA engineers. If you expect the presentations to be more technical, you will be disappointed. I did not see any preliminary voting for the lectures or speakers. It would also be nice if the videos and the slides were published right after the event; currently, more than a month after the end of ISTA 2017, they are still not available. Here is the link to the ISTA conference YouTube videos. I hope you find this article useful!