Pushing the Brakes: A Critical Review of AI
Will Artificial Intelligence (AI) improve technology far beyond humanity’s capabilities? Is the answer to higher quality, higher productivity, compliance, and security in the code of AI systems?
Not entirely. As with other emerging technologies, the allure of new and powerful techniques can amaze and excite. People are about equally likely to become extremely comfortable with, or extremely put off by, technology that can handle what humans can't, or no longer want to, accomplish, and in less time.
Giving that trust away blindly can be an expensive mistake. Here are a few details to consider as you look through AI options that could transform your business, industry, projects, or life in general.
Artificial Intelligence Risks: Promises and Dangers
AI & Machine Learning Training May Be Rushed. Many current AI systems use supervised learning. This type of learning involves choosing and labeling specific data sets for targeted training. In short, developers can mark which types of data should be involved in a lesson.
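To make that labeling step concrete, here's a minimal, hypothetical sketch: developers "mark which types of data should be involved in a lesson" by attaching labels, and a toy nearest-centroid model learns from them. All names, features, and numbers below are invented for illustration; real systems use far richer features and models.

```python
# Toy supervised learning: labeled 2-D feature vectors train a
# nearest-centroid classifier. Labels are the "supervision."

def centroid(points):
    """Average of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(labeled_data):
    """Group examples by label and compute one centroid per label."""
    by_label = {}
    for features, label in labeled_data:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist2(c):
        return (features[0] - c[0]) ** 2 + (features[1] - c[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

# Developers choose and label the data sets used for targeted training:
training_set = [
    ((1.0, 0.9), "spam"), ((0.9, 1.1), "spam"),
    ((0.1, 0.0), "ham"),  ((0.0, 0.2), "ham"),
]
model = train(training_set)
print(predict(model, (0.95, 1.0)))  # prints "spam"
```

The choice of which examples go into `training_set`, and which labels they get, is exactly the lever the next section worries about.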
A major risk in supervised learning is that rushed products could be trained on data sets that give good surface-level answers but can't handle specific or out-of-the-ordinary situations.
These cheaper, rushed AI systems are barely better than standard call-center automation: they can funnel many queries into a set list of canned answers, but they can't reason at deeper levels.
Essentially, they miss the point of AI and modern machine learning by being only small steps ahead of chatbots, search engines with vaguely related results, and magic 8-ball apps.
How can you prove that AI testing produces consistent, useful results? While supervised learning is necessary because of how much truly random testing could miss, how can you prove that AI experts aren’t gaming the system at a level you simply can’t understand?
If you're not part of the AI and machine learning community, there's little you can do, and staring over someone's shoulder isn't likely to work as well as some think. You could try to learn a bit more about the AI industry, but unless you achieve a lucky breakthrough, there are still niche, high-level details that aren't available for public consumption.
There's no easy answer. The problem with being at the edge of technology is that there may be other experts who know how to put on a good show. Without a large population of experts who can investigate the issue for you, scams can run rampant.
Thankfully, AI, machine learning, deep learning, and big data analytics are already becoming standard parts of the information and computer sciences. With current professionals retraining and education pipelines sending students toward AI, consultants ready to fight for your best interests are emerging.
Hostile AI Injection. The concept isn’t that outlandish. If malicious AI developers want to cause your system to malfunction, they could leave certain phrases, commands, or other backdoor access methods in place.
Getting into a system that you programmed yourself isn't difficult. These days, the trick is hiding your backdoor entrances well enough that other programmers won't find them during due diligence.
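As a toy illustration of what such a backdoor can look like (the function, the trigger phrase, and the access logic are all invented for this sketch), consider an input handler with a buried escalation path. This is the kind of thing a code review during due diligence should be hunting for:

```python
# Hypothetical backdoor: a magic phrase hidden in an input handler
# silently grants privileged access, bypassing normal authorization.

BACKDOOR_TRIGGER = "xyzzy-override"  # invented trigger phrase

def handle_request(text, authorized=False):
    """Process a user request; 'authorized' should come from real auth."""
    # The buried escalation path: the trigger flips authorization on.
    if BACKDOOR_TRIGGER in text:
        authorized = True
    if not authorized:
        return "access denied"
    return "performing privileged action"

print(handle_request("hello"))                      # prints "access denied"
print(handle_request("please xyzzy-override now"))  # prints "performing privileged action"
```

In a real code base the trigger wouldn't be a readable constant; it might be obfuscated, split across modules, or encoded in data, which is what makes these entrances hard to find.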
If data sets can be supervised, they can be poisoned.
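The poisoning idea can be sketched in a few lines: an attacker with write access to the training data flips a fraction of the labels before training ever starts, a technique commonly called label flipping. The function name, parameters, and data below are hypothetical:

```python
# Sketch of label-flip poisoning, assuming an attacker who can rewrite
# the labeled training data before the model is trained.
import random

def poison(labeled_data, fraction=0.3, seed=0):
    """Return a copy of the data with roughly `fraction` of labels flipped."""
    rng = random.Random(seed)
    labels = sorted({label for _, label in labeled_data})
    poisoned = []
    for features, label in labeled_data:
        if rng.random() < fraction:
            # Swap in any incorrect label; the features stay untouched,
            # so the tampering is invisible to a casual data inspection.
            label = rng.choice([l for l in labels if l != label])
        poisoned.append((features, label))
    return poisoned

clean = [((0.1, 0.0), "ham"), ((0.9, 1.0), "spam")] * 5
dirty = poison(clean, fraction=0.4, seed=1)
```

Because only the labels change, a model trained on `dirty` can look healthy on surface metrics while quietly learning the attacker's mistakes, which is why the investigation below takes specialized skills.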
Standard cyber-security techniques are necessary, but there is more to think about. If your AI partners poison your data sets, how can you prove it? For the time being, you’ll need a cyber-security team trained in AI and machine learning to investigate.
But what if the attacks are external? What if there was a way to slightly suggest, influence, and nudge AI systems into certain responses over time?
There are times when AI systems are fooled by adversarial images: pictures that look normal to a person but translate to something strange inside the model. AI isn't as advanced as the human brain (yet), and it's possible to craft images that trick the AI into the wrong answer.
Such attacks aren't just experimental. They already happen and have become a fun trick for surface-level hackers and even internet enthusiasts looking for a laugh.
Something important to note is that images are just one type of input. There is always a chance that someone could write malicious input, convert it to another medium (audio, video, images, and so on), and damage the system.
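The image-fooling trick can be illustrated with a toy linear scorer. Real attacks on neural networks (such as the fast gradient sign method) use the network's gradient, but the principle is the same: nudge each input feature slightly in the direction that flips the score. Every weight and input value here is invented for the sketch:

```python
# Toy adversarial perturbation against a linear classifier.

def score(weights, x):
    """Linear score: positive means class A, negative means class B."""
    return sum(w * xi for w, xi in zip(weights, x))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def adversarial(weights, x, eps=0.2):
    """Shift each feature by eps against the sign of its weight,
    the direction that most efficiently lowers the score."""
    return [xi - eps * sign(w) for w, xi in zip(weights, x)]

weights = [0.5, -0.3, 0.8]
x = [0.2, 0.1, 0.1]              # original score: 0.15 (class A)
x_adv = adversarial(weights, x)  # small per-feature nudges

print(score(weights, x) > 0, score(weights, x_adv) > 0)  # prints "True False"
```

Each feature moved by only 0.2, yet the classification flipped; against a deep model, the per-pixel changes can be small enough that a human sees no difference at all.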
As AI and machine learning progress, developers will need to figure out not only how to mitigate these attacks, but how to absorb and deflect them as simple errors and continue on their way.