WeAreDevelopers Congress took place a few months back, in May, in the wonderful city of Vienna. I held off writing a summary of my learnings because I was waiting for the recordings to become public so that I could link my top picks. I also missed a couple of talks I was interested in, so I wanted to catch up on them. Sometimes it was really hard to choose which talk to join, as there were lots of sessions running in parallel, and some talks were in such high demand that no more people were allowed into the rooms.
The list of my favourite talks:
Angie Jones: The Reality of Testing an Artificial World
The big question is, how do we know that an application works when it uses machine learning? How do we test such an application? How do we test when the outcome is not predictable, when the outcome is not an exact value but a range of valid results?
Very first step: learn how the application learns.
The system creates models based on what it learned. These models are constantly updated as the application learns more.
Second step: train the system.
Create an automation script with lots of data so that the application can create these models.
Third step: assert on the outcome.
There is no exact answer to what the expected outcome is. Use the POCS method, which is Plain Old Common Sense.
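The assertion step above can be sketched in a few lines. This is a hypothetical illustration, not from the talk: the model, the input, and the price band are all made up. The point is asserting against a range of validity plus a common-sense check, instead of one hard-coded expected value.

```python
# Hypothetical sketch: model and thresholds are illustrative only.
def predict_house_price(square_meters: float) -> float:
    """Stand-in for a trained ML model; any regression model fits here."""
    return 3000.0 * square_meters + 15000.0

def test_prediction_is_plausible() -> None:
    price = predict_house_price(80)
    # POCS: a price must be positive and fall within a plausible band;
    # we deliberately do not assert equality with a single number.
    assert price > 0, "a price can never be negative"
    assert 100_000 <= price <= 500_000, f"implausible price: {price}"

test_prediction_is_plausible()
```

The band itself comes from domain knowledge (or from the training data's distribution), which is exactly where the "common sense" part of POCS lives.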
Writing down the bugs you find is difficult. A bug report here is more like writing a paper: it should describe how we trained the model and why our expectation is what it is.
As technology advances, so must our test approaches, and there is still a need for critical thinking.
Tougher questions, like ethics and biases in ML (my favourite topics), were also touched on in the talk.
Florian Grunow: (In)Secure Web Applications
These are case studies about security issues found in some applications and the ways these vulnerabilities were fixed. This talk actually gave me great ideas for testing. It turns out input validation is not a silver bullet.
I learnt more about SQL injection, HQL injection, weak cookie implementation, cookie scope, active password autocomplete, Cross-Site Scripting and unauthenticated administrative functionalities.
Sebastian Schrittwieser: Security by Obscurity
These are case studies about how certain apps (Tinder, Superswap, Snapchat, WhatsApp) could be attacked. The vulnerabilities presented have already been fixed. I stayed in my seat for long minutes after this talk.
Days after the conference I did some research, searching for answers to the question: how to raise awareness of security from the planning phase onwards and bake security into the application. I wanted to find approaches that could help prevent the applications we build from being vulnerable. I found an interesting article on the owasp.org site: Agile Software Development: Don’t Forget EVIL User Stories.
At this conference I learnt a lot, and I was grateful for the opportunity to be there. But there was something that I didn’t get. Beverages (water) were served in paper cups, which were only half full. When I asked the waiter to refill my cup with some more water, I was told to grab another one.
The WeAreDevs conference is often called the Woodstock of developers. Allegedly there were 8000+ attendees this year. While we were attending brilliant talks, like one on how we can save the endangered snow leopards with the help of ML, we created a significant amount of trash during these three days. Does anyone else see the problem here?
On the evening of the first day I went to a local meetup that was somehow tied to the conference. Angie Jones was one of the presenters. I took the opportunity to learn more from her.
Angie Jones: How to Get Automation Included in Your Definition of Done
The Definition of Done is something the team should agree on together. Automation can be included in the DoD; however, that alone is not enough.
Engineers who write automated tests should be part of the teams. Automation is useful, but not all tests should be automated. Automating a test takes time in itself and increases the size of the code base to be maintained. Decide which test cases to automate.
Collaborate with business to find feature usages and risk areas.
In order to make the feature testable and automatable, collaborate with frontend developers and ask them to put ids on the elements you need.
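A small illustration of why those ids matter, using only Python's stdlib `html.parser` (the HTML snippet and the id names are made up for the example). An id-based locator survives layout changes that would break a positional lookup:

```python
# Illustrative sketch, not from the talk: locating an element by a
# stable id attribute instead of by its position in the page.
from html.parser import HTMLParser

class IdFinder(HTMLParser):
    """Collects every tag that carries an id attribute."""
    def __init__(self) -> None:
        super().__init__()
        self.ids: dict[str, str] = {}

    def handle_starttag(self, tag, attrs) -> None:
        attrs = dict(attrs)
        if "id" in attrs:
            self.ids[attrs["id"]] = tag

page = '<form><div><button id="submit-order">Order</button></div></form>'
finder = IdFinder()
finder.feed(page)

# Wrapping the button in more divs would break a positional lookup
# (e.g. "third child of the form"), but not this id lookup.
assert finder.ids["submit-order"] == "button"
```

In a real UI test the same idea applies through your driver's id locator, for example Selenium's `find_element(By.ID, "submit-order")`.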
Automate strategically. We have to be smarter: look for shortcuts and use them where possible. Do not let your test be blocked by an unrelated test that reveals an issue.
Build the test automation framework incrementally. Build what you need, not more.
This recording was made at another conference: