Last year, Andres Freund, a Microsoft engineer, spotted a backdoor in xz Utils, an open source data compression utility that is found on nearly all versions of GNU/Linux and Unix-like operating systems.
There's already a whole swathe of static analysis tools used for these purposes (e.g. SonarQube, GitHub code scanning). Of course their viability and costs affect who can and does utilise them. Whether or not they utilise LLMs I do not know (but I'm guessing probably yes).
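For what it's worth, here is a minimal sketch of what a rules-based scan of a contribution could look like: a toy script that flags added lines in a unified diff matching a few hypothetical "suspicious" patterns. The patterns and thresholds are illustrative assumptions, not the rule set of SonarQube, CodeQL, or any real tool, which work very differently and far more thoroughly.

```python
import re
import sys

# Toy illustration only: hypothetical patterns that might warrant human review.
# Not any real tool's rule set.
SUSPICIOUS_PATTERNS = [
    (re.compile(r"\beval\s*\("), "dynamic code evaluation"),
    (re.compile(r"base64\s*(?:\.b64decode|--decode|-d)\b"), "base64 decoding"),
    (re.compile(r"curl .*\|\s*(?:ba)?sh"), "pipe-to-shell download"),
]

def scan_diff(diff_text: str) -> list[str]:
    """Return human-readable findings for added lines in a unified diff."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), 1):
        # Only inspect added lines; skip the "+++ file" header lines.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for pattern, reason in SUSPICIOUS_PATTERNS:
            if pattern.search(line):
                findings.append(f"diff line {lineno}: {reason}: {line[1:].strip()}")
    return findings

if __name__ == "__main__":
    # Usage: git diff | python scan_diff.py
    for finding in scan_diff(sys.stdin.read()):
        print(finding)
```

Even a trivial check like this shows the limits: the xz backdoor hid its payload in "test data" rather than readable source, which is exactly the kind of thing simple pattern matching misses.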
Not just a problem for open source, surely? The answer is to use AI to scan contributions for suspicious patterns, no?
And then when those AIs also have issues, do we use AI to check the AI that checks the AI?
It's turtles all the way down.