Artificial intelligence is undoubtedly a field that advances every day. It is true that the more it advances, the more conveniences it offers the general public, but this progress is not entirely positive: if we talk about cybersecurity, this area is particularly affected by the automation emerging from the wave of chatbots being built on top of ChatGPT.
Why artificial intelligence is a cybersecurity risk
Let’s understand that this (at least for now) is not about artificial intelligence outthinking us and trying to subdue us. It is about something simpler: so many tasks are being automated with a technology that is not yet mature that the results end up mediocre, especially in terms of their security and reliability.
We are not just talking about asking ChatGPT to generate a text whose reliability is not guaranteed (which is also the case). We are talking about users who delegate almost all of their development or programming tasks to this type of artificial intelligence which, as mentioned, is not fully mature at this point in history, so the code it generates can contain security flaws.
These security flaws hit hardest the users who rely on low-cost “programmers” and “developers” to provide the code they need, for example, to add a feature to their website. If that code is actually generated by an AI such as ChatGPT, the customer ends up paying for an insecure deliverable, and new vulnerabilities are introduced into the information assets where that code is embedded.
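To make the risk concrete, here is a hypothetical sketch of the kind of “add a feature to my website” code a chatbot might hand back when nobody asks for secure development practices. The Flask endpoint, database file, and table names are invented for illustration; the flaw shown, building an SQL query by string interpolation, is a classic example of the sort of vulnerability this article warns about, because it allows SQL injection through the request parameter.

# Hypothetical example of insecure, quickly generated code.
# The Flask route, database file and table are invented for illustration.
import sqlite3

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/profile")
def profile():
    user = request.args.get("user", "")
    conn = sqlite3.connect("site.db")
    # VULNERABLE: user input is interpolated directly into the SQL text,
    # so a crafted "user" value can inject arbitrary SQL.
    rows = conn.execute(
        f"SELECT name, email FROM users WHERE name = '{user}'"
    ).fetchall()
    conn.close()
    return jsonify(rows)

Code like this often works perfectly in a quick demo, which is exactly why it gets shipped: the flaw only shows up when someone sends input the author never tested.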

How to protect yourself from the risks of AI-generated code
First and foremost, do not blindly use code generated by artificial intelligence. There is much talk about the fear users have of being replaced by an AI in their jobs; for jobs such as programming, development and even cybersecurity, that will not happen yet, because there is currently no artificial intelligence for home or professional use capable of generating code that follows a strict secure-development policy. This does not mean that all AI-generated code is vulnerable; it means that the greater the complexity of the task, the greater the likelihood of a flaw.
In fact, ChatGPT itself warns you that the code it generates may contain security flaws, because it does not follow any secure development policy and was not created for that purpose. Yes, there are several artificial intelligences built solely to generate code, but, as I say, all of this is still just beginning. If you want to ensure the security of your code, at least at this point in history, write it by hand and, if anything, let an AI assist you, so that the main work is done by you and your beautiful mind.
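Continuing the hypothetical example above, this is the kind of fix a human reviewer would make before accepting AI-assisted output: the user input is passed as a bound parameter instead of being spliced into the query string. The names are still invented, but the pattern, parameterized queries, is the standard defense against SQL injection and a small illustration of why the main work should stay with you.

# Hypothetical hardened version of the same endpoint, after human review.
import sqlite3

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/profile")
def profile():
    user = request.args.get("user", "")
    conn = sqlite3.connect("site.db")
    # Safe: the input is bound as a query parameter, so the database
    # treats it as data rather than as part of the SQL statement.
    rows = conn.execute(
        "SELECT name, email FROM users WHERE name = ?",
        (user,),
    ).fetchall()
    conn.close()
    return jsonify(rows)

The difference between the two snippets is one line, which is precisely the point: a chatbot will happily produce either version, and only a person who understands the code can tell which one they were given.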