Categories
Ace Breaking News

FEATURED BREAKING AI REPORT: ChatGPT Redis Bug Revealed Friday Exposes Personal Data

@acenewsservices

This is our daily post, shared across Twitter & Telegram and published first here with Kindness & Love XX on peace-truth.com/

#AceNewsRoom in Kindness & Wisdom provides News & Views @acebreakingnews

Ace Press News From Cutting Room Floor: Published: Mar.28: 2023:

#AceSocialDesk – ChatGPT leaks bits of users’ chat history


OpenAI Reveals Redis Bug Behind ChatGPT User Data Exposure Incident

OpenAI on Friday disclosed that a bug in the Redis open source library was responsible for the exposure of other users’ personal information and chat titles in the upstart’s ChatGPT service earlier this week.

The glitch, which came to light on March 20, 2023, enabled certain users to view brief descriptions of other users’ conversations from the chat history sidebar, prompting the company to temporarily shut down the chatbot.

“It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time,” the company said.

The bug, it further added, originated in the redis-py library, leading to a scenario where canceled requests could cause connections to be corrupted and return unexpected data from the database cache, in this case, information belonging to an unrelated user.
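To make that failure mode concrete, here is a minimal, self-contained sketch in Python. It is not redis-py's actual code; the FakeConnection class and the payload strings are invented stand-ins that only model the reported behaviour: a cancelled request leaves an unread reply on a shared connection, and the next request on that connection receives it.

```python
# Minimal sketch (not redis-py internals) of how an asyncio cancellation
# can desynchronise a pooled connection: request A's reply is never read,
# so the next request on the same connection receives A's data instead.
import asyncio


class FakeConnection:
    """Stand-in for a pooled Redis connection: one FIFO pipe of replies."""

    def __init__(self):
        self.replies = asyncio.Queue()

    async def execute(self, command, reply, cancel_before_read=False):
        await self.replies.put(reply)        # server "answers" the command
        if cancel_before_read:
            raise asyncio.CancelledError     # caller gave up before reading
        return await self.replies.get()      # normal path: read our own reply


async def main():
    conn = FakeConnection()
    # Request 1 (user A) is cancelled after sending but before reading.
    try:
        await conn.execute("GET session:userA", "userA-secret",
                           cancel_before_read=True)
    except asyncio.CancelledError:
        pass                                 # connection returns to the pool dirty
    # Request 2 (user B) reuses the connection and reads the stale reply.
    leaked = await conn.execute("GET session:userB", "userB-data")
    print(leaked)                            # prints "userA-secret"


asyncio.run(main())
```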

To make matters worse, the San Francisco-based AI research company said it introduced a server-side change by mistake that led to a surge in request cancellations, thereby upping the error rate.

While the problem has since been addressed, OpenAI noted that the issue may have had more implications elsewhere, potentially revealing payment-related information of 1.2% of the ChatGPT Plus subscribers on March 20 between 1-10 a.m. PT.

This included another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date. It emphasized that full credit card numbers were not exposed.

The company said it has reached out to affected users to notify them of the inadvertent leak. It also said it “added redundant checks to ensure the data returned by our Redis cache matches the requesting user.”
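OpenAI has not published what those redundant checks look like. The general pattern, though, is straightforward: tag every cached entry with its owner's ID and verify the tag against the requesting user on every read, so a wrong entry is rejected rather than served. The sketch below is hypothetical (the cache_set/cache_get helpers and CacheOwnershipError are invented names) and uses a plain dict in place of Redis:

```python
# Hypothetical illustration of an ownership check on cache reads: store the
# owner's ID alongside the payload, and refuse to return a mismatched entry.
import json


class CacheOwnershipError(Exception):
    pass


def cache_set(cache: dict, key: str, owner_id: str, payload: dict) -> None:
    cache[key] = json.dumps({"owner": owner_id, "payload": payload})


def cache_get(cache: dict, key: str, requesting_user: str) -> dict:
    raw = cache.get(key)
    if raw is None:
        raise KeyError(key)
    entry = json.loads(raw)
    # Redundant check: even if the cache hands back the wrong entry,
    # data belonging to another user never reaches the requester.
    if entry["owner"] != requesting_user:
        raise CacheOwnershipError(f"cache entry {key!r} belongs to another user")
    return entry["payload"]


cache = {}
cache_set(cache, "chat:123", "user-A", {"title": "Trip planning"})
print(cache_get(cache, "chat:123", "user-A"))   # OK: returns the payload
# cache_get(cache, "chat:123", "user-B")        # raises CacheOwnershipError
```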

OpenAI Fixes Critical Account Takeover Flaw

In another caching-related issue, the company also addressed a critical account takeover vulnerability that could be exploited to seize control of another user’s account, view their chat history, and access billing information without their knowledge.


The flaw, which was discovered by security researcher Gal Nagli, bypasses protections put in place by OpenAI on chat.openai[.]com to read a victim’s sensitive data.

[Image: ChatGPT account takeover (@acenewsservices)]

This is achieved by first creating a specially crafted link that appends a .CSS resource to the “chat.openai[.]com/api/auth/session/” endpoint and tricking a victim into clicking it, causing the response, a JSON object containing the accessToken string, to be cached in Cloudflare’s CDN.

The cached response to the CSS resource (which has the CF-Cache-Status header value set to HIT) is then abused by the attacker to harvest the target’s JSON Web Token (JWT) credentials and take over the account.
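For defenders, the practical takeaway is that per-user API responses must never be cacheable. A quick way to check this, sketched below with Python's requests library, is to request a crafted static-looking path under the endpoint and inspect the CF-Cache-Status header. The URL follows the pattern described above; the flaw itself has long been fixed, so this is illustrative only.

```python
# Illustrative cache-deception probe, assuming the (long-fixed) pattern
# described above. Defenders can run the same kind of check against their
# own endpoints: a per-user API response must never come back as a cache HIT.
import requests

# Crafted static-looking path under the session endpoint (per the write-up).
url = "https://chat.openai.com/api/auth/session/test.css"

resp = requests.get(url, timeout=10)
status = resp.headers.get("CF-Cache-Status", "absent")
print("HTTP:", resp.status_code, "CF-Cache-Status:", status)

# Vulnerable behaviour: the CDN reports HIT and serves the cached JSON,
# including another user's accessToken. Fixed behaviour: MISS/BYPASS/absent,
# or an error response, with no per-user data in the body.
if status == "HIT" and "accessToken" in resp.text:
    print("Session responses are being cached: tokens may be exposed!")
else:
    print("No cached session data observed.")
```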

Nagli said the bug was fixed by OpenAI within two hours of responsible disclosure, indicative of the severity of the issue.

[Image: message saying chat history is unavailable (@acenewsservices)]
[Image: chat title reading “Wife Valentine’s Day Gift?” (@acenewsservices)]

New gadgets and software come with new bugs, especially if they’re rushed. We can see this very clearly in the race between tech giants to push large language models (LLMs) like ChatGPT and its competitors out the door. In the most recently revealed LLM bug, ChatGPT allowed some users to see the titles of other users’ conversations.

LLMs are huge deep neural networks, trained on billions of pages of written material.

In the words of ChatGPT itself:

“The training process involves exposing the model to vast amounts of text data, such as books, articles, and websites. During training, the model adjusts its internal parameters to minimize the difference between the text it generates and the text in the training data. This allows the model to learn patterns and relationships in language, and to generate new text that is similar in style and content to the text it was trained on.”

We have written before about tricking LLMs into behaving in ways they aren’t supposed to. We call that jailbreaking. And I’d say that’s fine: it’s all part of what could be seen as a beta-testing phase for these complex new tools. As long as we report the ways in which we are able to exceed the limitations of a model and give the developers a chance to tighten things up, we’re working together to make the models better.

But when a model spills information about other users, we stumble into an area that should have been sealed off already.

To better understand what happened, it is necessary to have some basic working knowledge of how these models work. To improve the quality of the responses they get, users can organize their conversations with the LLM into a type of thread, so that the model, and the user, can look back and see what ground they have covered and what they are working on.

With ChatGPT, each conversation with the chatbot is stored in the user’s chat history bar, where it can be revisited later. This gives the user an opportunity to work on several subjects and keep them organized and separate. The history feature itself was unavailable for a while during the incident.

Showing this history to other users would, at the very least, be annoying and unacceptable, because it could be embarrassing or even give away sensitive information. Did I ask ChatGPT what to get my wife for Valentine’s Day?

Nevertheless, this is exactly what happened. At some point, users started noticing items in their history that weren’t their own.

Although OpenAI reassured users that others could not access the actual chats, users were understandably worried about their privacy.

According to an OpenAI spokesperson on Reddit, the underlying bug was in an open source library.

[Image: Sam Altman’s post on Reddit]

OpenAI CEO Sam Altman said the company feels “awful”, but the “significant” error has now been fixed.

Things to remember

Giant, interactive LLMs like ChatGPT are still in the early stages of development and, despite what some want us to believe, they are neither the answer to everything nor the end of the world. At this point they are just very limited search engines that rephrase what they found about the subject you asked about, unlike an “old-fashioned” search engine, which shows you possible sources of information and lets you decide which ones are trustworthy and which ones aren’t.

When you are using any of the LLMs, remind yourself that they are still very much in a testing phase. Which means:

  • Do not feed them private or sensitive information about yourself or your employer. Other leaks are likely and may be even more embarrassing.
  • Take the results with more than just a grain of salt. Because the models don’t provide sources of information, you can’t know where their ideas came from.
  • Make yourself familiar with an LLM’s limitations. It helps to know how up to date its information is and which subjects it can’t converse freely about.
OPENAI
ADDITIONAL INFORMATION: HACKER NEWS REPORT & MALWAREBYTES NEWS REPORT

Editor says … Sterling Publishing & Media Service Agency is not responsible for the content of external sites or of any reports, posts or links, and can also be found here on Telegram: https://t.me/acenewsdaily. Thanks for following; as always, we appreciate every like, reblog or retweet and comment, thank you.
