AT&T Develops Safe ChatGPT-Based Tool with Microsoft for Employee Use
Andy Markus, AT&T’s chief data officer, wrote in a blog post on Tuesday that the company had worked with Microsoft, a major investor in OpenAI, to make the tool safe for internal use.
Concerns have been raised that coders could lose work as generative AI advances rapidly, but AT&T says its tool is being used to assist its coders and software developers and to translate customer and employee documentation.
AT&T is also exploring use cases such as modernizing legacy software code, helping employees find answers to HR questions, and streamlining some of its customer services.
“We are very optimistic about the present and future of AI,” Markus wrote in the blog post. “We think it will help businesses better serve their customers, enable new products and services that weren’t possible before, and make our employees more productive and creative.”
According to the blog post, AT&T and Microsoft collaborated to ensure the application was secure for corporate data, a common concern for businesses considering ChatGPT. The tool, called Ask AT&T, has been “pressure tested for leakage” to ensure that confidential information does not leak into the public sphere.
After the launch of ChatGPT caused a whirlwind of investor excitement, businesses have been gingerly integrating AI technology into their workflows. However, AI-powered technologies are not without flaws, and several businesses have experienced problems after integrating them too soon into daily operations.
In the blog post, AT&T stated that generative AI tools are “not magic or infallible” and that it is ultimately employees’ responsibility to ensure that the tool’s output is “accurate and appropriate.”