Being able to write and test code at the push of a button using generative artificial intelligence (AI) models like GitHub Copilot and ChatGPT sounds almost too good to be true. So good, in fact, that there must be pitfalls.
Software professionals embrace AI as a powerful tool for building, launching, and updating applications, but they are nervous about its intellectual property and security implications. Is AI-generated code scraped from someone else's intellectual property? Does the model draw on internal company data that needs to be kept safe?
According to a GitLab survey conducted in June with 1,001 developers and executives, technologists recognize that AI adoption requires attention to rights, privacy, security, productivity, and training.
The majority of respondents (79%) expressed concern about AI tools accessing personal information and intellectual property. The main concern was the potential exposure of sensitive information such as customer data.
Topping the list of concerns about using AI-generated code are copyright issues. Nearly half (48%) of respondents worried that code generated with AI might not be subject to the same copyright protection as human-generated code, and an additional 39% were concerned about security vulnerabilities in such code.
Still, engineers are optimistic that these problems can be solved, and adoption continues to advance. As many as 90% of respondents whose organizations currently use AI in software development are confident using AI in their daily work. Additionally, 60% said they use AI every day, and 22% said they use it several times a week. More than half (51%) rated their organizations' efforts to incorporate AI into the software development lifecycle as "very" or "extremely" successful.
AI is seen as an important investment from a software development perspective. Of those respondents whose organizations are using or plan to use AI in the future, 83% said they have or plan to allocate specific budgets to AI for software development. Benefits cited include increased efficiency (55%), reduced cycle time (44%), and increased innovation (41%).
Training and skills also emerged as common themes among the barriers and concerns respondents identified: 81% said more training is needed to use AI in the workplace, and 87% said companies need to reskill their workforce to adapt to the changes AI brings. The top concern was the burden of new skill sets to learn (42%), followed by a lack of adequate skills for using AI and interpreting its output (34%).
The bottom line is that AI cannot replace human oversight and innovation. More experienced professionals embrace AI as a tool to assist in skill development, but they do not believe it can fully replace the expertise, knowledge, and problem-solving abilities of experienced professionals like themselves, the study authors argue.
"At the end of the day, it's more than just humans versus machines. Leveraging the experience of human team members alongside AI is the best way, and probably the only way, for organizations to fully address their security and intellectual property concerns."
AI may be able to generate code faster than human developers, but "human team members must verify that AI-generated code is free of errors, security vulnerabilities, and copyright issues before it is used," they said.