What should we make of threats from AI that jeopardize the software we all rely upon?

Recent news articles describe instances where Anthropic’s Mythos AI is said to have found flaws in software that has been around for a long time.  In each case, the software in which Mythos is said to have found flaws came from the open-source community.

What should we, as readers and users of software, make of this?  What should we do differently?  Should we avoid open-source software?  In this blog article I offer my thoughts.  There are things that we, as readers and users of software, should do and should not do.  But avoiding open-source software is not among them.

The writers of the articles try to help the reader appreciate the risks we all face now that Mythos can attack software we all rely upon and find bugs in it.

A separate issue, not the focus of the articles, is the question of open-source versus proprietary (closed-source) software.

One could pick any particular type or category of software and identify an open-source solution in that category and a closed-source (proprietary) solution.  Here are a few examples:

    • The software that people choose to read and send their email, the email client.  Open-source is Thunderbird, closed-source is Microsoft Outlook.
    • The software that people choose to run the data router in their house.  Open-source is OpenWRT or Tomato or pfSense, closed-source is whatever is inside Verizon’s FIOS router or your Linksys or TP-Link or Asus or Eero or Netgear.
    • The software that people choose for the hardware wallet they entrust their bitcoin to.  Open-source is Trezor, closed-source is Tangem or Ledger.
    • The software that people choose to host their web site.  Open-source is WordPress, closed-source is Squarespace.
    • Messaging.  Open-source is Signal and RCS text messaging if it is end-to-end encrypted (with the little padlock).  Closed-source is SMS text messaging generally, Facebook messaging, iMessage, WeChat, WhatsApp.
    • Social media.  Open-source is Bluesky and Mastodon.  Closed-source is X, TikTok, Instagram.
    • Word processor.  Open-source is LibreOffice, closed-source is Microsoft Office.
    • The software that people choose for their smart phone.  Open-source is Android, closed-source is Apple’s iOS.
    • The software that people choose for their VPN.  Open-source is WireGuard, closed-source is almost all of the others.
    • Software for image editing.  Open-source is GIMP, closed-source is Photoshop.

It is a pretty much settled observation that no matter what category you pick, the closed-source one has more defects and security flaws in it than the open-source one.  The closed-source one was written by some team of programmers who inevitably made whatever mistakes they made (always a nonzero number of mistakes).  The company employing the programmers keeps the source code secret.  The software has some number of security flaws, some of which are eventually discovered by members of the public.

In contrast, the open-source one is “out there” for anyone to inspect.  Many bugs get found and fixed, in a quiet process that draws little attention.  Yes, the articles describe two instances where Mythos supposedly found a software flaw that public inspection did not find.  Meanwhile, whatever the closed-source equivalent is, it has many more bugs in it.  Mythos will eventually find those bugs as well, and there will be more of them to find.

Also not directly addressed in the articles is the problem of backdoors.  History is filled with situations where the vendor of some closed-source product succumbed to pressure from a government to design a backdoor into the software of that product.  Or where the vendor succumbed to pressure from a government to hold back from using the best encryption algorithm available, instead using a weaker one that the government is able to decode.  A government will make use of the backdoor or the weaker encryption to eavesdrop on whatever is going on.  Inevitably the backdoor or intentional weakness is also eventually found by other governments or by a bad actor that is not a government.
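
To make concrete why an intentionally weakened algorithm protects against no one, here is a toy sketch (not any real vendor’s cipher, and the key size is exaggerated downward for illustration): a message “encrypted” with a single-byte XOR key can be recovered by anyone willing to try all 256 possible keys.

```python
# Toy cipher with a deliberately tiny key space: one byte.
def xor_encrypt(plaintext: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in plaintext)

ciphertext = xor_encrypt(b"meet at noon", key=0x5A)

# Any party, government or not, can brute-force all 256 keys and keep
# the candidate that decodes to plausible text.
for key in range(256):
    candidate = xor_encrypt(ciphertext, key)
    if candidate == b"meet at noon":  # stand-in for a plausibility check
        print(key, candidate)
```

A weakness installed for one eavesdropper is, in practice, installed for every eavesdropper; the only thing that changes is how long the brute-force search takes.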

But when a system is open-source, a backdoor designed into the product would get found out.  So would the use of weaker encryption instead of the best available.

Also not directly addressed in the articles is the problem of the vendor making use of its inside access to whatever is going on.  Facebook, X, and Instagram all spy on everything their users do.  If Bluesky or Mastodon were to do this, everybody would know.

With a closed-source product, one line of attack that has been around for decades is for human beings to reverse-engineer the executable code in the product, working back in the direction of the source code.  The humans can then find bugs and security flaws.  See for example my book Inside the Model 100.  As part of writing that book, I reverse-engineered the entirety of the closed-source software in the Model 100 computer.  One of the appendices in that book is a complete set of comments that I wrote, explaining the source code.

AIs can reverse-engineer closed-source code faster and more effectively than humans can.  It is inevitable that Mythos and other AIs will be asked to reverse-engineer closed-source software and will find bugs and security flaws.  Probably AIs have already been asked to do this and have already found bugs and weaknesses.
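
As a toy illustration of the idea, here is how readable structure can be recovered from compiled code.  This sketch uses Python’s bytecode disassembler as a stand-in for what disassembly tools do with native executables; the function and its secret constant are invented for the example.

```python
import dis

# Pretend this function shipped compiled, without source: we hold only
# its code object, the analogue of a closed-source executable.
def secret_check(pin):
    return pin == 1234

# Disassembly recovers the logic: load the argument, load the constant
# 1234, compare for equality, return the result.  The hard-coded PIN,
# which the vendor never published, is plainly visible.
for instr in dis.Bytecode(secret_check):
    print(instr.opname, instr.argrepr)
```

Recovering a hard-coded constant from one small function is trivial; the point is that the same process, scaled up by an AI across an entire binary, surfaces the bugs and secrets the vendor assumed were hidden.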

So the main point of the articles, I think, is “we should be scared that Mythos is out there and it may cause harm”.  I agree with that point.

What would be unfortunate is if a reader of the articles were to conclude “it is better to choose closed-source products than open-source products”.

What has already been happening, and will continue to happen, is that the various open-source product vendors, and the open-source communities, make use of various AIs to look for bugs in the open-source software.  Sunlight is the best disinfectant, that kind of thing.  I think that gradually the various open-source initiatives will creep closer to being bug-free.

In contrast the various closed-source vendors will differ from one to the next in their diligence about making use of the opportunity to find bugs with AI in their closed-source software.

Eventually I expect the practical consequence of “AI looking for bugs in software” will be “fewer bugs in software” and it will happen for both open-source and closed-source software.

And what will remain, after that transition has happened, is that we will be back where we started.  There will be open-source software where you can be confident it does not contain backdoors or intentional selections of weak encryption algorithms.  And there will be closed-source products where it is not possible for the users to check for such things.

There is another really good point in the articles, although it is buried toward the end.

In the old days people wrote software the hard way.  Human brains thinking, and human beings writing the software.  I have written lots of very difficult software over the years.  Many decades ago, in an earlier life, I helped write software that would control a commercial jet (the Lockheed L-1011 jumbo jet) as it lands on an airport runway.  One hopes that we avoided making mistakes in that software that would lead to the airplane crashing.  I believe there was never a crash of an L-1011 during a landing.  Now there are no more L-1011s in service, so if we did make mistakes, that is in the past.

Nowadays what happens more and more is that a person will use some AI shortcut to generate software to do this or that.  Create an app to manage dental records or whatever.  The resulting software will get put into use by users.  And quite literally no human being who is actually experienced with writing software will have been involved in that process.  In particular the person who used the AI did not do the tedious and unpleasant work of learning the programming language involved and learning how to write code in that language, and making mistakes and learning from the mistakes.  And did not arrange to be part of a team with a second pair of eyes to try to catch mistakes.

The app that manages the dental records will eventually be found to have some defect or weakness (or more likely, twenty defects and weaknesses).  But the defect or weakness would likely not have happened (or would likely have been caught and corrected, earlier in the process) if the software had been written the hard way, the old-fashioned way, by human beings drawing upon experience.
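
A hypothetical example of the kind of defect I mean, with invented names and schema: AI-generated code routinely builds database queries by pasting user input into the query string, a SQL-injection flaw that an experienced programmer, or a second pair of eyes, would catch immediately.

```python
import sqlite3

# A stand-in for the dental-records database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT, notes TEXT)")
conn.execute("INSERT INTO patients VALUES ('Alice', 'cleaning due')")

def find_patient_unsafe(name):
    # The defect: string interpolation lets a caller inject SQL.
    return conn.execute(
        f"SELECT notes FROM patients WHERE name = '{name}'").fetchall()

def find_patient_safe(name):
    # The experienced-programmer version: a parameterized query.
    return conn.execute(
        "SELECT notes FROM patients WHERE name = ?", (name,)).fetchall()

# A classic injection string dumps every record through the unsafe
# version, but comes back empty through the safe one.
print(find_patient_unsafe("' OR '1'='1"))   # leaks all records
print(find_patient_safe("' OR '1'='1"))     # []
```

The two functions look almost identical, which is exactly why this class of bug survives when nobody involved has been burned by it before.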

It reminds us of the cases we have read about where a lawyer asks an AI to write a legal brief, and the lawyer files the brief in court.  And it turns out that one of the cited cases in the brief does not exist.  The lawyer gets sanctioned.  The client has a weaker position in court or loses the case or gets convicted of a crime.  It would have been better had the lawyer “done it the hard way”, the old-fashioned way, researching the cases and personally drafting the brief.  And maybe arranging to have a second pair of eyes look at the work product before it went out the door.

Anyway, the articles correctly point out that Mythos is going to have a much richer hunting ground for security flaws in a world where more and more apps are written by AIs rather than the old-fashioned way, by humans doing hard work.

What are the takeaways for people like you and me?  What I say is, don’t purchase or use consumer products where you have no way of knowing whether the software was written by an AI instead of by humans doing it the hard way.  And products where you have no way of knowing whether it has a backdoor or intentional weakness built in.  And avoid products and software that are not open-source.

Do you have a view about this?  Please post a comment below.
