Towards a Future of Self-Testing Systems
Tariq, can I ask for a few suggestions on what techniques to think about when trying to make a test step generator? (Quite an abstract question, I know.) But I am trying to think of something which can observe my test steps and learn to suggest cases around them. For any suggestions/hints/directions, I will be thankful.
When you say observe your test steps, are you referring to your steps via the UI?
Are you talking web, mobile, desktop? I can answer in a general way if that's better
Yes, generic will be good.
well the first thing is that you'd need to be able to capture those steps. I would say if you are wide open and it's something like a web app, then a Chrome plugin could be a viable way to go
there are obviously some existing tools that can capture information, but it's usually very specific to a tool
so all these record and playback tools
usually do that but then the output is very specific to a scripting language under the hood
so if you wanted something to capture your steps at a high level you'd have to first define a language that helps you express those steps
determine the level of abstraction
like is it that I clicked this element that happens to be a submit button
or is the abstraction you want in a step "to submit the form"
sounds picky but it's an important distinction if you want to capture high-level vs. low-level steps
one of the projects I worked on in the past was defining a domain-specific language for testing
allowed you to express test steps
high level first and then refine them with the details
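As a sketch of what such a capture language might look like, assuming a Python data model where each recorded step carries a high-level intent plus optional low-level refinements (the names and fields here are illustrative, not from Tariq's project):

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One captured test step, expressed at a chosen level of abstraction."""
    intent: str                                   # high level: "submit the login form"
    action: str = ""                              # low level: "click"
    target: str = ""                              # low level: "button[type=submit]"
    details: dict = field(default_factory=dict)   # refinements added later

# capture the high-level intent first...
step = Step(intent="submit the login form")

# ...then refine it with low-level details once they are known
step.action = "click"
step.target = "button[type=submit]"
step.details["fields"] = {"username": "text", "password": "password"}
```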
oh I should probably be threading these replies shouldn't I
well there ya go I blew up the hub already
Ok, so once the steps are generated, cases can be combinations of probabilities under some constraints, right?
ye it depends on what you are trying to build out... are you thinking that you would want to be able to have some sort of agent mimic your actions
or give you suggestions
guide your testing?
the way you would go about generating test cases once you have captured this information depends on your goal
No no, mimicking the steps is not needed, just record and suggest cases.
Like an example would be if the step is to enter a password (which obviously the recording won't understand, but at a high level it knows that some text was entered) and later we can tell it at a low level that it was a password. So the assistant should tell me to try different sorts of passwords like correct, incorrect, short, long, etc.
oh ok ye like a virtual test assistant
very cool idea first of all
somehow I remember talking about this a little with someone at a conference
Can I get a link to that conference, please? I would like to see or listen to it, and would be delighted to.
ye so for this you could capture probabilities but also have some sort of look-ahead algorithm
look-ahead algorithm means?
one possibility is for you to build a crawler that can explore the application and build a model
and so what it does to look ahead is more that it knows that certain screens in the application exist
reachable via certain actions
and if you've run any sort of tests on that application before or other folks are testing it
then their actions can contribute to a probabilistic model
so when I say it looks ahead, it basically looks at where you are in the application and compares your next possible actions with that probability model
and says hey, after clicking the login button most folks do this
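One minimal way to sketch that probability model is as a simple Markov-style transition table over recorded actions. This is an illustration of the idea, not any particular tool's implementation; all names are invented:

```python
from collections import Counter, defaultdict

class LookAheadModel:
    """Suggests likely next actions from a model of everyone's recorded sessions."""

    def __init__(self):
        # current screen/state -> counts of the actions taken from it
        self.transitions = defaultdict(Counter)

    def record(self, state, action):
        self.transitions[state][action] += 1

    def suggest(self, state, k=3):
        """Top-k next actions with their observed probabilities."""
        counts = self.transitions[state]
        total = sum(counts.values())
        return [(action, n / total) for action, n in counts.most_common(k)]

model = LookAheadModel()
model.record("login_screen", "click_login")
model.record("login_screen", "click_login")
model.record("login_screen", "click_forgot_password")
print(model.suggest("login_screen"))
# [('click_login', 0.666...), ('click_forgot_password', 0.333...)]
```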
great idea!
Probabilistic means that the actions don't necessarily lead to the same outcome each time? In other words, the submit button doesn't submit the form if you get some kind of validation warning?
Is that right?
or if it can't suggest anything to you at the time because it doesn't have enough data, it lets you do your thing and takes your actions as input to the model
@Kevin Thomas probabilistic doesn't necessarily mean non-deterministic
what it does imply though is that it will be adaptive
in other words if most people are tending towards certain actions then it will give you the same outcome
but once that changes to the majority doing something else, it will adapt
hopefully I'm making sense here
It sure does!
so if the test being run was a negative test that says hey it shouldn't submit
then not just the action, but all the previous actions that led to that transition would be part of the model
so the suggestions would lead him/her down a path of doing that negative test
sorry let me uncheck that send to channel deal, it was stuck
I think I understand. I'm building a Q-learning model where we take real-world application logs to have the agent replicate the overall steps that real-world users are taking. But it makes random adjustments to the action steps once the agent has learned a "positive" test.
oh ok now that is very cool
we're looking into the same kind of project but at a lower level
That sounds really interesting. With web pages, there are relatively easy constraints for the agent's action space. In services/API work, that seems hard. Am I understanding correctly?
it's like the weather thing
everything is relative
the UI called the API testing hard
the API called the Unit Testing hard
but so is the UI problem, however I think your approach of learning from previous actions and then deviating
is the right path
it's like what we do as humans to explore an application
It seems like a "feasible" way to go, but I still have some hard thinking to do.
There are places the agent gets stuck for sure, and working through those is where I'm spending time right now.
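For flavor, a rough sketch of the epsilon-greedy style of deviation Akash describes: mostly follow the learned policy, occasionally make a random adjustment to explore around it. All names and parameters here are illustrative assumptions:

```python
import random

def choose_action(q_table, state, actions, epsilon=0.1):
    """Mostly replay the learned 'positive' path; occasionally deviate to explore."""
    if random.random() < epsilon:
        return random.choice(actions)  # the random adjustment to the action steps
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))

def q_update(q_table, state, action, reward, next_state, actions,
             alpha=0.5, gamma=0.9):
    """Standard one-step Q-learning update."""
    best_next = max((q_table.get((next_state, a), 0.0) for a in actions), default=0.0)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```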
Thanks @Tariq King for all the inputs. Will surely help me think in some direction towards my goal.
@Akash Anand no problem. Would be great to have you share your work some time; if you are interested in sharing at a conference venue and would like my help, let me know
also in general training on test cases is a great direction
like having some base set of tests as your seed data
then generating variations off of those
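A sketch of that seed-and-variations idea, assuming a dictionary-based test format and a hand-written mutation catalog (both are invented for illustration, echoing the password example from earlier):

```python
import copy

# hypothetical seed test: the base set your generator starts from
seed_test = {"username": "alice", "password": "correct-horse"}

# per-field perturbations: empty, short, long, case-changed, hostile
mutation_catalog = {
    "password": ["", "x", "correct-horse" * 50, "CORRECT-HORSE", "' OR 1=1 --"],
    "username": ["", "alice ", "ALICE"],
}

def variations(seed, catalog):
    """Yield one new test case per (field, mutated value) pair."""
    for field_name, values in catalog.items():
        for value in values:
            case = copy.deepcopy(seed)
            case[field_name] = value
            yield case

for case in variations(seed_test, mutation_catalog):
    print(case)
```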
Great direction! I was thinking of something just like this.
Sure @Tariq King will be happy to share what I come up with. Will keep posted.
@Tariq King can you kindly post a link here about the conference you mentioned earlier here https://techwellhub.slack.com/archives/CDSLXKJ6T/p1580307347074500?thread_ts=1580307040.072000&cid=CDSLXKJ6T
sure I believe it was at http://hustef.hu
however, if you are in the US then I know StarEast/West and PNSQC also have AI tracks
HUSTEF last year had quite a few AI talks and they are all recorded and freely available on that site
Unfortunately I am not in the US. I am from India. But I try to join conferences and webinars online. I would also like to join online meetups, if there are any. Please share any links which I can join. :blush:
k for sure...
one good upcoming conference that is virtual is the Automation Guild conference.... it's not free but it's not very expensive either, and it's great
Thanks. Will check it out. I got an email about it this morning, or maybe within the last few days. Thanks for recommending it.
Hey @Tariq King! Recently there has been a lot of discussion about the regulation of #ai. What is your take on whether or not AI should be regulated, and what does it mean for us as testers?
definitely needs to be regulated, like any other technology, if we are going to make sure we are safe and protected
I think what is happening more recently is that the widespread use of this technology is causing us to really question its capabilities (as we should) and how it can be used to cause accidental or intentional harm
so we definitely need laws, regulations, monitoring systems, and to make sure there is accountability so that these systems are used responsibly
I think for us as testers it means that there is a big opportunity for us as professionals to contribute in a big way
even take some of the jobs in these fields... last year I was at the Quest for Quality conference in Dublin with Davar Ardalan, founder of IVOW, speaking about this very subject.
Her company focuses on making sure AI is culturally aware for example
we both agreed that the testing mindset plays a key role in the success of AI/ML systems
diversity of thoughts, backgrounds and thinking
all of the traits she mentioned just reminded me of some of the best testers I know
Hey @Tariq King, about the whole talk of self-healing tests in the UI (specific to selectors) and solving those issues via ML. How much actual ML is used in these places? Or is it just a combination of text similarity, heuristics, and some regex?
not much ML
but have to keep it real, this whole brittle selectors thing sometimes bothers me
not that it isn't an issue, because it is, but AI/ML can provide us with so much more in terms of bridging the gaps in testing that promoting it only for self-healing UI seems like a waste at times
like we have truly hard problems such as test data generation, the oracle problem, test selection, etc.
most of the self-healing UI stuff that I've checked out, when I look at it, is a lot of trial-and-error stuff
that doesn't really use much ML
but it's promoted in that way
:slightly_smiling_face: same thoughts. You put it so well. About the brittle selectors, we kind of separate styling (CSS) and data selectors.
that's why for me this year I'm pushing a whole new line of thinking about these "solutions"
I understand that adding the ML term to a product works well for companies.
and trying to get folks to really think of biologically inspired testing solutions (something I am calling BITS).
gave a talk at a meet up last week in Portland trying to inspire that line of thinking
it's easy for folks to just scream AI/ML
Can you elaborate on it or post a link to that talk if available?
but show me how your solution is truly something that represents a method, or way of solving a key problem using some biologically inspired solution
sure I think I can post the deck here in slack
it's pretty self-explanatory :slightly_smiling_face:
I believe self-healing selectors are more of a uniqueness ranking of selectors, selecting a locator query of a lower or equivalent rank when the first/preferred strategy fails. Am I correct, @Tariq King?
yep it's just a series of fallbacks
until it finds the right "element"
and then wow, we've healed
So, the algorithm has to fall back to the next best “element”, based on similarity, element type, div region, etc.
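A minimal sketch of that fallback chain, assuming Selenium-style locators; the ranking order and element names are illustrative:

```python
from selenium.common.exceptions import NoSuchElementException

def find_with_fallbacks(driver, ranked_locators):
    """Try locators from highest to lowest uniqueness rank; 'heal' by falling back."""
    for strategy, query in ranked_locators:
        try:
            return driver.find_element(strategy, query)
        except NoSuchElementException:
            continue  # preferred strategy failed; fall back to the next rank
    raise NoSuchElementException(f"no locator matched: {ranked_locators}")

# ranked roughly by uniqueness: id > name > CSS > XPath
submit_locators = [
    ("id", "submit-btn"),
    ("name", "submit"),
    ("css selector", "form button[type=submit]"),
    ("xpath", "//form//button[contains(., 'Submit')]"),
]
# element = find_with_fallbacks(driver, submit_locators)
```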
ye I just wondered why we're so focused on healing selectors
it's not the core problem
its like i got a cut
and i'm bleeding
let me focus on the blood
and patching it up
as opposed to like, hey why did I get cut in the first place
UI testing is complex yes, but the current tools are brittle
let's build new tools that aren't brittle
using AI and ML
instead of using AI and ML to try to patch the brittle methods used by existing tools
Hi Tariq! Is AI really the future of software and software testing? Or is it all just hype?
A bit of both... there's definitely a lot of hype, as there is with any new technology, probably a bit more so with AI
but it's because the general applicability of AI makes it so useful in many different industries
so I think with that breadth there comes quite a bit of hype, marketing, fear, etc.
The truth is that AI is the present.... a lot of this technology is already part of software that we use in our day to day lives
so it's a great question. Do we dismiss it as hype? I think we can't. Actually, I've felt over the last year some level of social responsibility to represent AI/ML well in our space and have folks understand the truth behind these things
and even learn to be able to separate it from the hype, so that we don't dismiss it and then bad things happen
awesome thanks!! Sorry got pulled into a meeting.
I know at one of the conferences Jason told us not to fear AI, that it is here and it's coming! So thanks for this info too!
@Tariq King , Are there areas in AI/ML that you feel the testing community could be spending more time in, or where you're just not seeing the expected interest level?
Not just the testing community, but in general there is not enough interest in testing AI/ML systems
as usual we're using this technology, putting it into our products, and not considering the implications
so techniques for testing different ML approaches and models like neural networks
how do we even know we have coverage of these models?
how do we deal with their dynamic nature?
are there ways for us to generate edge cases or use equivalence classing to either get coverage or uncover issues with the nets, training, bias, etc.?
even the AI/ML community still uses very primitive means for testing these models
accuracy, precision, recall, F-scores
so testing community could definitely bring a lot to the table there
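One research direction on that coverage question is structural coverage measures for neural networks, such as neuron coverage (proposed in work like DeepXplore). A toy sketch of the idea, assuming activations recorded into a NumPy matrix:

```python
import numpy as np

def neuron_coverage(activations, threshold=0.0):
    """Fraction of neurons activated above threshold by at least one test input.

    activations: (num_test_inputs, num_neurons) matrix recorded from a layer.
    """
    fired = (activations > threshold).any(axis=0)
    return fired.mean()

# toy example: 3 test inputs over a layer of 4 neurons
acts = np.array([[0.9, 0.0, 0.0, 0.2],
                 [0.1, 0.0, 0.7, 0.0],
                 [0.0, 0.0, 0.3, 0.0]])
print(neuron_coverage(acts))  # 0.75 -- the second neuron is never exercised
```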
In terms of the vendors using AI/ML to automate testing there is also a lack of activity at the API/service and unit levels
everyone is doing UI testing
also most folks are tackling mobile at the UI level
so even there a huge gap exists for desktop web
Wow. A lot of areas to work on and think about there!
ye there's more in my mind but I paused lol
Good morning @Tariq King. Generally speaking, automation and manual testing can complement one another when testing/investigating systems, usually as part of a strategy. How can the addition of AI complement automation and manual testing?
This is a great question... there are several ways that AI can help to improve what we do both from an automated and a manual perspective
some things cross the intersection of both
for example, test prioritization and scheduling
sorry for the delay there someone was at the door :slightly_smiling_face:
so for example there is some work out there on leveraging ML for predicting which tests to run
that could be leveraged both for manual testing and automated test selection and coverage
Also a lot of the advances in image recognition can clearly help with visual testing
more stable mechanisms for automated UI testing
also for manual testing a good suggestion that came up here even earlier today was like a virtual assistant to help guide testing
maybe give suggestions as to what to test next, which techniques
coverage, and using probability to provide test information to someone who is exploring the application
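A toy sketch of the test-prediction idea: rank tests by signals such as recent failure history and code churn. The features and weights below are invented stand-ins for what a real model would learn from data:

```python
def priority(test):
    """Heuristic stand-in for a learned model: likelier-to-fail tests run first."""
    staleness = min(1.0, test["runs_since_last_executed"] / 10)
    return (0.6 * test["recent_failure_rate"]
            + 0.3 * test["touched_code_churn"]
            + 0.1 * staleness)

tests = [
    {"name": "login",    "recent_failure_rate": 0.30, "touched_code_churn": 0.8, "runs_since_last_executed": 0},
    {"name": "checkout", "recent_failure_rate": 0.05, "touched_code_churn": 0.1, "runs_since_last_executed": 9},
    {"name": "search",   "recent_failure_rate": 0.10, "touched_code_churn": 0.9, "runs_since_last_executed": 3},
]

for t in sorted(tests, key=priority, reverse=True):
    print(t["name"], round(priority(t), 3))   # login, search, checkout
```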
Thank you too, great response. I tried to keep it general enough because the only 2 things I know (today) are manual and automation... generally speaking, since it all depends on the system. Thanks so much for this. No worries on the delay, we are all busy. These answers showed that AI can complement manual and automation testing. Thanks @Tariq King!
@Tariq King do I need a PhD to do AI-based testing?
in all seriousness to answer Jason's question, it really depends :slightly_smiling_face:
if you're trying to do some really far out stuff maybe a PhD can be useful
but I think it's less about the PhD and more about some of the independent thinking skills
the creativity, and thinking outside of the box, the analytics, the math sometimes when needed
and combining that with practical skills of development
lots of tools out there that are making this type of work easier as well
with things like AutoML you may not even need a technical background to train models and test things that normally you would need a whole data science team to do
hey @Tariq King, what skills do you think testers should be focusing on to prepare for the future of AI driven testing?
Testers should focus on the things that testers do best, questioning the validity of things
Bringing that mindset to table when it comes to AI/ML and using it for testing
helping to shape what these tools do, and since we may be the ones training them and using them most, what features would accelerate our testing
I think having an understanding of AI/ML helps
but it also doesn't mean that everyone needs to become an AI expert
or that folks will need to learn Python
I think these tools are becoming advanced to the point where they are easier to use and to get value from them you may not need to even care how they are built
in fact if they are built right, it will all be transparent to the user
@Tariq King how would you define ‘self-testing’ for noobs?
The whole idea of self-testing is just designing the system with components that can observe their own behavior as well as execute tests to verify that behavior
so rather than building a script externally to do testing
you would make testing an inherent part of the design... in other words make testing a feature
isn't that TDD?
Not quite... TDD is more about designing the code and writing tests as part of that design activity
so you write a test
it fails and then you write code to make it pass
here I'm talking about writing code that actually allows the system to do testing
so the system can test itself at runtime
better with an example
we can pick one of Jason's favorite examples
yes thank you I'm still missing the distinction
imagine that you had a search engine that could be used by anyone to look for items, activities, things etc
anything on the web
now let's say that engine is built using a neural network
and it starts to see that there is part of its network that is heavily used
or maybe that the connections are changing frequently
and it could run some tests on itself to see if some of its search expectations were still the same, or now producing different results
now it may not know if the new search results are intended or malicious
but it could at least detect that difference
like hey, searching for president no longer gives me donald trump for whatever reason
or searching for peach gives me impeachment
you get the idea :slightly_smiling_face:
With external test automation, we typically tend to run a set of tests as part of a production gate. A system that has self-testing capabilities built in is capable of executing tests against itself at runtime, when it is already in production. I think this is really useful when the system has some dynamic component that can change at any time -- e.g., an online learning algorithm, such as a neural net, like the example Tariq gave.
so the system self-monitors in order to design new tests according to live production data
and runs the test in real time
but it needs to see something suspicious in order to trigger the design of those tests
yes, there is usually a monitoring aspect that is built into these systems
so this is some mechanism which is capable of targeting specific suspicious activity and designing ways on the fly to check behavior around that activity
sounds very interesting
being able to have the system monitor its own behavior is a key part of it
then after that trigger, it goes into a mode where now it investigates what is going on with a series of tests
in other words it doesn't just trust its own behavior
just like we don't trust what developers did :slightly_smiling_face:
the big difference here is it's a runtime activity not a design time activity
most of our testing now happens before we ship
few advanced folks are testing in production
but still very much a human involved
here the system is testing itself in production at runtime
and has the ability to question its own models, components, behaviors
right so the design of the self monitoring system is key
and maybe in the future even truly self-heal
so it knows when to look and it knows how to compose relevant self checks
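To make that monitor-then-investigate loop concrete, a toy sketch; the thresholds, names, and checks are all invented for illustration and echo the search example from above:

```python
class SelfTestingComponent:
    """Toy wrapper: a component that watches its own drift and then checks itself."""

    def __init__(self, model, expectations, drift_threshold=0.3):
        self.model = model                  # the live component, e.g. a search function
        self.expectations = expectations    # query -> previously trusted result
        self.drift_threshold = drift_threshold
        self.drift = 0.0

    def observe(self, change_magnitude):
        """Monitoring mode: accumulate how much the model has changed at runtime."""
        self.drift += change_magnitude
        if self.drift > self.drift_threshold:
            self.investigate()
            self.drift = 0.0

    def investigate(self):
        """Triggered self-tests: don't just trust your own behavior."""
        for query, expected in self.expectations.items():
            actual = self.model(query)
            if actual != expected:
                self.flag(query, expected, actual)

    def flag(self, query, expected, actual):
        # in a real system: alert a human, quarantine the model, or roll back
        print(f"self-test difference for {query!r}: {expected!r} -> {actual!r}")

component = SelfTestingComponent(
    model=lambda q: "impeachment" if q == "peach" else q,
    expectations={"peach": "peach"},
)
component.observe(0.4)  # enough drift to trigger the self-tests
```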
seems like there's a danger of setting in motion a process which creates its own definition of reality
without being able to tell if that definition is actually true
so handles for human insight and adjustment of this process seem very important
think about it this way
right now we're starting to build systems that do just that
not with the purpose of testing
what I'm saying is that testing should be baked in as well
because a lot of the adaptive behavior we are starting to see in ML based systems
bots chatting with other bots, creating languages, cars driving themselves
doing all of that without some major safety and testing checks at runtime
is very dangerous
and the very real worry that AI takes over will only be solved if we start designing these systems to test themselves and to explain themselves
yes keyword 'explain'
i remember from Jason's recent talk, in his answer about debugging, he mentioned tracing
but the same danger you mention in the adaptive behavior systems will be present by definition in the self monitoring system within the system
in both cases, handles built for easy human insight, control, and adjustment seem to be pretty important
yes very much so
just looked back at Jason's question though
and he said self-testing for noobs
so I would define self-testing for noobs as this: imagine if Jason could take care of himself :slightly_smiling_face:
instead of everyone else taking care of him (like what happens today)
good definition @Jason Arbon?
@Tariq King to piggyback on @Shak's question - you did a talk just over a year ago about AI eliminating manual testing (I watched it on YouTube). You say that it is already here; also in that talk you touch on programs being able to "self test". Do you see the role of the software tester going the way of, say, the typesetter? And if not, then how do you see the role evolving as AI/ML starts to ramp up in our profession?
I think the role of the software tester will evolve in a number of directions
All of these talks recently about the regulation of AI etc makes me believe that folks with a testing mindset will be in great demand
and what we consider to be a role "testing software" may be a role validating AI/ML systems
making sure bad things don't happen
I do see testers moving into a place where the types of tasks they do change
and they are more using ML based systems and leveraging them to accelerate their testing
Thank you, I follow what you are saying, but I also have a thought that with this evolution it (testing) may become a niche field.
interesting... explain more
what would be the niche it would fall into
Sorry for the delay, I got pulled into something. I was actually thinking more about academia and how, from a learning/training standpoint, AI/ML isn't necessarily a direct part of testing right now. Now of course as things evolve that may change, but not everyone is going to be able to learn or transition from testing software to validating AI/ML systems. (I hope I was able to convey that correctly)
Hi @Tariq King, thanks for answering all of these questions! You touched some on ML bias and the need for human intervention in a couple of answers just now, so this is a general question for that: how does AI become biased and how can we deal with that?
AI doesn't become biased per se, I would say it is inherently biased
Bias for me is part of AI being good at, and used for, simulating human judgement
As we know, we all have biases, and therefore all the training data, information that is out there to build these systems will have different forms and types of bias
depending on when it is sourced, where it is sourced from, how it is sourced, who sources it
so the real question is how do we deal with bias
and leverage it for good and eliminate the bad
bias can be good in many ways, but we tend to not want to call it bias
for example a good recommendation engine is very biased
it is biased towards what you like
what you want
same with search engines
but with search it's a little more clear how it can have undesired bias
if it shows me results for folks assuming a particular demographic, culture, age
and that happens to exclude my age, my culture, my demographic
then that sort of bias is unwanted
however, the key here is that bias in AI is unavoidable
because bias in our world is unavoidable
what we need to do is get back to our roots as testers and make sure that given the context of the application of AI
its bias is appropriate for its users
Booklet on this topic if I may be so bold/rude: https://drive.google.com/file/d/17vOGkUNqsbtkwRIIoNaK1R2aONnLDDfY/view?usp=drivesdk
Thanks @Jason Arbon, yes great resource... you taught me a lot, think there's a video of your talk somewhere too
here it is https://www.youtube.com/watch?v=GkZmjkpF_28
YouTube | AICamp: AISF19: Testing AI and Bias, by Jason Arbon, test.ai
Jason when are you taking over the hub?
Not invited to fancy places, have to crash the party. I’ll take the hint ;)
no hint... i'm sure you're invited
they just put me first to make sure its successful
You are preproduction ...
and you are that one bug that slipped into production
wow the hub is hot right now... it was quiet for a while, now I feel like I can't type fast enough
if folks are interested, there are a number of events this year that will have quite a bit of talks on AI and testing
there is testingstage.com in Ukraine (March)
the EPIC Experience in San Diego in April
there's always StarEast and StarWest that have ongoing track themes around it
I'll also be at TestBash Detroit https://www.ministryoftesting.com/events/testbash-detroit-2020 and Agile Testing Days in Chicago
On the subject of AI and self-testing, how do you see it impacting holistic test strategies? Many companies follow the test pyramid as a guideline for a test strategy, and @Jason Arbon has talked about the testing layers and cake. Do you see self-testing changing how companies look at their overall test strategy?
@Jason Arbon just likes cake that's all
but in all seriousness, testing being a holistic activity isn't going to change
Not just levels of testing in the pyramid, but all the other dimensions of testing
AI/ML driven strategies and eventually self-testing will be powerful because they can be applied multidimensionally
It's actually one of the problems I see now with some of the vendor approaches
they are too focused on one thing
not holistic enough
so we're still having to piece everything together
to answer the question I don't think it will necessarily change how companies view their overall testing strategy
once they have it right the strategy is the same
however how its implemented may be different
today we think of scaling a lot of these things with people
instead of with technology, which is inefficient
so I think at the company level the goal will be the same and the strategy will be the same
implementation will be different
That's awesome. Especially since a lot of the AI for software testing that I have seen focuses a lot, if not exclusively, on functional UI testing. I think it would be awesome to start looking at an AI for software testing solution that allows for a multidimensional approach.
It seems like self-testing is only possible in systems designed and developed with it from the beginning, or refactored to accommodate it, which would probably be just as hard as starting over.
Do you see self-testing algorithms being designed in central locations by vendors and sold to companies, or do you see it as a strategy or pattern for a company to use in its own software (presumably a rather wealthy company)?
I think it is actually possible to deploy self-testing outside of a system designed from scratch
I think that it's about building observability into the system first
and there are different ways to do that without the design needing to be ground up
all the work on creating seams in applications
which have also been applied to legacy
are good examples of where there are opportunities to enable self testing
I think what it takes is for someone with that type of knowledge to be able to analyze the system, the unique situation the company may be in
do you have a link to a good resource on 'seams' for me to get acquainted?
and do the work needed to build the right seams
there's a book
let me look up the link
it's around legacy systems and has a chapter on seams
or a few chapters :slightly_smiling_face:
here it is
seams concept in there...
we are most familiar with object seams
doubles, mocks, etc...
but there are several other types of seams, e.g. preprocessor seams
seams in the filesystem etc
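For a flavor of the object-seam idea from that legacy-code discussion, a small sketch where a clock dependency becomes a seam at which a test double can be substituted (all names here are illustrative):

```python
import datetime

class SystemClock:
    def now(self):
        return datetime.datetime.now()

class FixedClock:
    """Test double substituted at the seam."""
    def __init__(self, fixed):
        self.fixed = fixed
    def now(self):
        return self.fixed

class SessionManager:
    def __init__(self, clock=None):
        # the seam: behavior can be swapped here without editing this class's logic
        self.clock = clock or SystemClock()

    def is_expired(self, started, ttl_minutes=30):
        return self.clock.now() - started > datetime.timedelta(minutes=ttl_minutes)

# production uses SessionManager(); a (self-)test injects a double to control time
mgr = SessionManager(clock=FixedClock(datetime.datetime(2020, 1, 1, 12, 0)))
assert mgr.is_expired(datetime.datetime(2020, 1, 1, 11, 0))
```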
at the end of the day it's about making the system testable
of course that is better if designed in upfront
Thank you very much!
but it's not impossible either to get systems even legacy ones refactored or restructured with some seams that can help enable self-testing capabilities
I hope you don't mind if I ask another question now...
self-testing seems to be more applicable to situations in which the behavior of the system is dynamic.
When the behavior is statically defined, like in traditional development, why would self-testing provide advantages over enabling more observability and traditional testing techniques?
it probably wouldn't
i think your observation about it being more applicable to dynamically adaptive systems is right
one of the major things now is that with AI/ML a lot of the behavior of these systems is dynamic
so as more and more systems incorporate these technologies it becomes an increasingly important subject
but for things that are more static
self-testing isn't the answer
now it's always good to have observability
but that's a separate set of benefits
for testing in general
Thank you. It's very helpful, I think, to distinguish what kinds of problems these new tools can help solve, so we don't rush towards a specific solution for any reason other than solving a defined problem
yes it is
otherwise we just take a shiny new tool to every problem
and waste a lot of time and money
@Tariq King , what do you think is the biggest problem in software quality today?
@Jason Arbon not enough people truly care about quality until it's too late
too many people talking the talk and not walking the talk
so our biggest problems are the humans
great way to close this off lol
now I'll be quoted as saying that I want to get rid of all the humans