Tech

Meta Lawsuit – Florida Attorney General Challenges Social Media Impact on Minors

The digital age has brought about many conveniences and innovations, but it has also raised concerns about the well-being of minors in an increasingly connected world. Florida Attorney General Ashley Moody has recently taken a bold step by filing a federal lawsuit against Meta, the parent company of Facebook and Instagram, alleging that these platforms employ “manipulative” features that keep minors hooked. This lawsuit, known as the “Meta Lawsuit,” is part of a larger nationwide effort to address the effects of social media on the mental health and development of young individuals.

Unpacking the Meta Lawsuit

Moody’s lawsuit, which was filed in a U.S. district court in Tampa, presents a comprehensive array of allegations against Meta. It contends that the company’s platforms cause “serious harm to children, parents, and the community at large” by utilizing algorithms and features intentionally designed to maximize the time minors spend on these social media apps.

One of the key grievances raised in the lawsuit is related to the controversial “infinite scroll” design and auto-play features found on these platforms. According to Moody’s office, these features make it exceptionally challenging for young users to disengage, as there is no natural endpoint for the display of new information. This perpetual scrolling keeps users engaged, leading to increased exposure to advertisements.

The Impact on Mental Health

The lawsuit also alleges that Meta has deceptively downplayed the negative impacts on the mental health of teenagers and young individuals. It references a U.S. surgeon general’s advisory titled “Social Media and Youth Mental Health,” which highlights the risks faced by young individuals exposed to social media for more than three hours a day. Such individuals are twice as likely to experience symptoms of depression and anxiety.

Ineffective Age Gating

Another critical aspect of the lawsuit is the claim that Meta employs “ineffective age gating” practices on its platforms. This means that users younger than 13 can create and use social media accounts, which is in violation of the federal Children’s Online Privacy Protection Act. This raises concerns about the online safety of minors and their exposure to potentially harmful content.


Meta’s Defense

In response to these allegations, Meta has stressed its commitment to providing a safe and positive online experience for teenagers and their families. The company has pointed to its terms of service, which prohibit users under 13 from using Instagram, and its efforts to restrict ads targeting teenagers. Meta also argues that research on the negative impact of social media on teenagers’ mental health is not yet conclusive and highlights the potential positive impacts that these platforms can have on the lives of young people.

Conclusion

The “Meta Lawsuit” filed by Florida Attorney General Ashley Moody is a significant legal action that underscores the growing concerns about the influence of social media on the well-being of minors. This lawsuit encompasses a wide range of issues, from manipulative design features to the downplaying of mental health impacts. As the case progresses, it will undoubtedly provoke important discussions about the responsibility of tech companies in safeguarding the mental and emotional health of their young users and the need for clearer regulations in this digital age. It serves as a critical reminder of the complexities surrounding the positive and negative effects of technology on today’s youth.

Sahil Sachdeva is an international award-winning serial entrepreneur and founder of Level Up PR. With an unmatched reputation in the PR industry, Sahil builds elite personal brands by securing placements in top-tier press, podcasts, and TV to increase brand exposure, revenue growth, and talent retention. His charismatic and results-driven approach has made him a go-to expert for businesses looking to take their branding to the next level.


Tech

Microsoft’s Latest AI Venture Takes Flight in the Middle East


Microsoft is making a strategic move in artificial intelligence (AI) by investing $1.5 billion in Abu Dhabi’s G42, an AI group that has recently come under scrutiny for its ties to China. The collaboration marks Microsoft’s first foray into the Middle Eastern AI landscape, with plans to focus on AI development and digital infrastructure.

Led by Peng Xiao, a Chinese businessman and former CEO of Pegasus, a cybersecurity firm, G42 has faced questions regarding its connections to Beijing. Concerns have been raised, particularly by US officials, regarding the potential for G42 to facilitate the sharing of American technology and data with the Chinese government. However, Xiao has refuted these claims, dismissing them as “misinformation” in a recent interview with CNN.

Despite these concerns, both G42 and Microsoft have emphasized their commitment to adhering to US and international trade regulations as part of their partnership agreement. Microsoft President Brad Smith will even join the G42 board, signaling a deeper collaboration between the two entities.

One of the notable outcomes of this partnership is the development of an Arabic-language AI model named “Jais,” unveiled by G42 last year and hosted on Microsoft’s Azure platform. This initiative underscores the potential for AI to address linguistic and cultural diversity in technology.

Microsoft’s investment in G42 is part of a broader strategy to establish itself as a frontrunner in the AI sector. The company has already formed significant partnerships, including with OpenAI, contributing to its growth in recent years. However, these partnerships have drawn attention from regulators in the United States and Europe, who are wary of Microsoft’s expanding influence in the AI domain.

Beyond the Middle East, Microsoft has been actively pursuing AI investments worldwide. In February, it announced a partnership with Mistral, a leading French AI startup, and committed substantial funding to AI projects in Spain and Germany. This global outreach reflects Microsoft’s vision of ushering in a new era of AI-driven innovation across industries.

As Brad Smith remarked in a recent interview, “It’s all about this new AI era.” With Microsoft’s latest investment in G42 and its ongoing initiatives, the company is poised to play a significant role in shaping the future of AI on a global scale.

Tech

Legislators Introduce Extensive Initiative for Broadening Online Privacy Safeguards


In a significant move towards enhancing digital privacy, lawmakers have unveiled a comprehensive plan aimed at expanding protections for online users. The initiative, introduced by legislators, outlines a sweeping framework designed to bolster privacy safeguards across various digital platforms.

The proposed measures encompass a wide array of online activities, addressing concerns surrounding data collection, tracking, and user consent. Among the key provisions are stricter regulations on how tech companies handle user data, increased transparency requirements, and provisions for stronger user control over personal information.

Under the proposed plan, online platforms would be required to provide clear and accessible information about their data practices, including details on what information is collected, how it is used, and with whom it is shared. Additionally, users would have greater control over their privacy settings, with options to limit data collection and tracking.

The initiative also aims to strengthen enforcement mechanisms, empowering regulatory agencies to hold tech companies accountable for violations of user privacy rights. Penalties for non-compliance could include hefty fines and other punitive measures to ensure adherence to the new regulations.

Furthermore, the proposed plan includes provisions for greater collaboration between industry stakeholders, policymakers, and advocacy groups to foster dialogue and consensus on privacy-related issues. This collaborative approach seeks to balance the need for privacy protection with the innovation and competitiveness of the digital economy.

The unveiling of this extensive initiative represents a significant step forward in addressing growing concerns about online privacy and data protection. By establishing a robust framework for safeguarding user privacy rights, lawmakers aim to create a safer and more transparent digital environment for all.

As the legislative process unfolds, stakeholders from across the tech industry, civil society, and government will closely monitor developments, anticipating the potential impact of these proposed reforms on the digital landscape. With privacy concerns continuing to dominate public discourse, the need for effective and comprehensive privacy protections has never been more pressing.


Tech

FTC looks at TikTok’s security and privacy practices


The Federal Trade Commission is investigating TikTok’s data security and privacy practices, according to sources who spoke on the condition of anonymity.

The investigation is one more hurdle for the social media network, which already faces a possible US ban or a forced separation from its Chinese parent company.

According to the sources, the FTC is investigating TikTok for allegedly violating the Children’s Online Privacy Protection Act, which requires businesses to obtain parental consent before collecting data from children under the age of 13.

According to the sources, the agency is also looking into whether TikTok violated the FTC Act, which forbids “unfair or deceptive” business practices, in its statements about whether people in China could access US users’ data.

One of the sources claims that in the upcoming weeks, the FTC may file a lawsuit against TikTok or reach a settlement with the business. Politico first broke the story of the investigation.

“No comment,” was the response given by FTC Director of Public Affairs Douglas Farrar when questioned about the probe.

TikTok did not immediately respond to a request for comment.

The FTC investigation adds to the existential threat TikTok faces in the US. A bipartisan group in the US House of Representatives voted earlier this month to pass legislation requiring ByteDance to sell TikTok or see it removed from US app stores. President Joe Biden has said he will sign the bill if it reaches his desk, and it is currently before the Senate. However, Senate leaders have said they are proceeding methodically, which could delay the House bill or kill it altogether.

The Chinese corporation ByteDance, which owns the short-form video company, has refuted claims that its app risks US citizens’ national security.

TikTok, which does not operate in China, says the Chinese government has never accessed data from American users.

Cybersecurity experts say that Chinese law obligates ByteDance to comply with the government’s intelligence requests, which could put US user data at risk because ByteDance owns TikTok. To address those concerns, TikTok has implemented internal protocols that restrict access by non-US employees and has moved US user data onto cloud servers run by the US tech giant Oracle.

After BuzzFeed News revealed in 2022 that ByteDance employees had accessed US user data multiple times, TikTok admitted to Congress that staff based in China could access such data. In his first testimony before Congress last year, TikTok CEO Shou Chew acknowledged that a “misguided attempt” to find leakers within the company had led to the firing of numerous ByteDance workers for spying on specific US journalists.


Tech

Tesla reports production problems, which causes deliveries to fall short of expectations


Tesla’s first-quarter car deliveries fell precipitously, marking a dismal start to the year for a company plagued by market and reputational challenges.

The delivery figures released on Tuesday came amid a sluggish market for electric cars, rising borrowing costs, a slew of litigation over Tesla’s Autopilot driver-assistance software, and controversy surrounding the company’s CEO, Elon Musk. In a January earnings call, Musk warned that Tesla would grow at a “notably lower rate” this year as it invests in a next-generation car that it intends to begin producing in 2025.

According to Tesla, it delivered 387,000 cars to consumers in the first quarter, a 20% decrease from the previous quarter and a more than 8% decrease from the same period last year.

According to Wedbush Securities analyst Dan Ives, prior to Tuesday’s news, Wall Street analysts had predicted that Tesla would post 443,000 deliveries for the quarter. On Tuesday, shares of Tesla dropped 4.9%.

The company attributed the shortfall, at least in part, to the ramp-up of early production of the updated Model 3, shipping disruptions in the Red Sea, and a suspected arson attack at its Berlin plant.

For Tesla’s “ugly delivery number,” Deepwater Asset Management analyst Gene Munster cited the overall state of the economy as well as a decline in EV confidence. Munster tweeted that financing more expensive electric cars has become more costly because of rising interest rates. He also noted that “the excitement around [electric vehicles] has cooled, which further dampens sales.”

However, he added that Tesla remains “on the right track.”

Ives compared the company’s first quarter to “a train wreck into a brick wall.” Musk, he said, must now engineer a turnaround as the business pushes forward with its next car.

“Let’s face it: This was an absolute disaster of a first quarter that is difficult to explain away, even though we were expecting a bad one,” Ives stated. “We see this as a turning point in the Tesla narrative, where Musk has the opportunity to reverse the black-eye first-quarter performance and turn things around. If not, there may be some gloomier times ahead that could upend the Tesla story in the long run.”

The electric vehicle maker, whose shares fell more than 20 percent in the first quarter, cut prices throughout 2023 to prop up demand, but analysts said the reductions were not enough to overcome the challenges it faced in this year’s first quarter.

Karl Brauer, an executive analyst with the auto research firm iSeeCars.com, described it as “death by 1,000 cuts.” Although Musk claims he “has never had a demand problem,” Brauer said, there have been growing signs over the past year or so that Tesla is making more cars than the market is willing to buy.

Tesla reported that it produced 433,000 cars in the first quarter, which is 46,000 more than it shipped.

Broader market forces are also working against Tesla. Although sales of electric cars in the United States are still growing faster than sales of gasoline cars, interest in these vehicles has recently begun to cool, in part because of a lack of charging infrastructure. Some automakers, such as Mercedes-Benz, have scaled back or postponed their short-term electrification goals.

Meanwhile, BYD, a Chinese manufacturer of electric vehicles, surpassed Tesla in quarterly sales of electric vehicles last year.

Tesla’s declining sales figures exacerbate the company’s other problems. Regulators are also paying closer attention to its driver-assistance program, Autopilot. Last year, Tesla recalled 2 million vehicles, nearly every car the company has ever made, over concerns that the technology lacked sufficient safeguards to prevent driver misuse. The recall, carried out via a remote software update, followed an extensive examination of the technology by the National Highway Traffic Safety Administration.

A few days before the recall was announced, The Washington Post published an investigation showing that Autopilot had been involved in at least eight fatal or serious-injury crashes in places where the software was not supposed to be used.

Lawsuits concerning the company’s Autopilot software are also mounting. These cases seek to determine whether the software should share any of the blame for crashes or whether the driver bears full responsibility. This month, a jury is set to hear a wrongful-death case against Tesla involving a 2018 crash in which a Tesla on Autopilot struck a median on Highway 101 in Northern California while the driver was reportedly not paying attention.

So far, the company has succeeded in avoiding liability: in a lawsuit over Autopilot’s purported role in a fatal crash in Riverside, California, a jury last year found Tesla not liable.

Munster of Deepwater Asset Management said before Tuesday’s announcement that neither Musk nor investors seemed fazed by Tesla’s legal troubles. To underscore his support for Full Self-Driving, Tesla’s top driver-assistance system, Musk mandated last month that staff install and demonstrate the latest version to customers before closing a sale.

“Going forward, it is mandatory in North America to install and activate FSD V12.3.1 and take customers on a short test ride before handing over the car,” Musk wrote in an email to employees. He added that few people realize how well (supervised) FSD performs, and that while he is aware the requirement may slow deliveries, he considers it necessary nonetheless.

According to a poll conducted by market research firm Caliber and shared with Reuters, Tesla’s “consideration score” dropped to 31% in February from a peak of 70% in November 2021, when the firm began tracking consumer interest in the brand. Caliber attributed part of the decline to Musk’s contentious persona. One of the richest people in the world, Musk has courted controversy over the past year by endorsing strict immigration policies, amplifying antisemitic discourse, pushing conspiracy theories, and denouncing liberal causes as a “woke mind virus.”

His divisive remarks have alienated advertisers and users on X, the social media network he owns, formerly known as Twitter.

Musk claims that Tesla is “between two major growth waves” and that the company’s current sales problems are just the result of economic cycles.

Brauer said that Tesla’s legal troubles and Musk’s demeanor aren’t the main causes of the sales decline, though the controversy “certainly isn’t helping.”

Those factors, he said, only add to the company’s challenges.

Tesla did not respond to a request for comment.

Tech

Google to Purge Collected Data from ‘Private’ Web Browsers: Privacy Concerns Spark Debate


In a court filing on Monday, Google agreed to delete information it had gathered about users’ internet activity while they browsed with Chrome’s “incognito” private browsing option, settling a lawsuit brought by attorneys for the internet behemoth’s customers.

Additionally, Google pledged to maintain changes to the Chrome browser’s incognito mode that block tracking “cookies” used for advertising, and to disclose more clearly what information it keeps about users during private browsing. Individual customers still have the option to sue Google over the tracking, but unlike other recent tech lawsuit settlements, this one does not specify how much Google must pay those harmed by its practices.

Plaintiffs’ attorneys estimate that the settlement could ultimately cost Google billions, but that would require thousands of people to file individual lawsuits against the company.

David Boies, the chairman of the legal firm that spearheaded the complaint, Boies Schiller Flexner, said in an email that “this settlement is a historic step in requiring honesty and accountability from dominant technology companies.”

“We are happy to have settled this litigation, which we always believed had no merit,” Google spokesman José Castañeda stated. “The plaintiffs originally wanted $5 billion and are receiving zero. We are pleased to remove outdated technical data that was never used for personalization purposes and was never linked to an individual.”

The arrangement follows Google’s December agreement to settle, which averted a potentially high-profile trial. Concerns about how large internet corporations use their customers’ data have fueled a growing number of significant legal and regulatory challenges for Google, both domestically and internationally. In a significant setback for the company, a judge determined earlier this year, after Epic Games filed a lawsuit against Google, that the business had violated competition laws in how it operated its Android app store. After years of fighting class-action and government litigation over its data collection practices, Google has gradually begun to settle cases instead.

Tech

Baltimore’s Bridge collapses: Vehicles in water after ship strikes bridge


A large container ship struck a section of the Francis Scott Key Bridge in Baltimore early this morning, causing it to collapse. Deaths were feared.

BALTIMORE − The Francis Scott Key Bridge collapsed early Tuesday after it was struck by a large cargo ship, prompting a massive emergency response for at least seven people in the water.

The Baltimore City Fire Department described the collapse as a mass-casualty incident. “We received several 911 calls at around 1:30 a.m., that a vessel struck the Key Bridge in Baltimore, causing the collapse,” Kevin Cartwright, director of communications for the Baltimore Fire Department, told Reuters. “This is currently a mass casualty incident and we are searching for seven people who are in the river.”

Two people were pulled from the water, Baltimore City Fire Department Chief James Wallace said at a press conference; one was unhurt and the other is in serious condition. Seven more people are believed to be in the water, he added, though that number is “subject to change.”

According to the Associated Press, the ship caught fire and multiple vehicles fell into the river below. “The situation is dire,” Cartwright told the AP. “At this time, our main priority is attempting to recover and save these people.”

On X, Baltimore Mayor Brandon Scott said he was aware of the incident and was on his way to the bridge. “Emergency personnel are on scene, and efforts are underway,” he stated.

Maryland Governor Wes Moore has declared a state of emergency and is coordinating with an interagency team to quickly deploy federal resources from the Biden administration.


Tech

Elon Musk releases chatbot code in the most recent escalation of the AI war


On Sunday, Elon Musk, one of the richest men in the world, escalated his fight for control over artificial intelligence by disclosing the source code for his version of a chatbot.

A creation of xAI, the company Mr. Musk founded last year, Grok is designed to answer questions with a tongue-in-cheek tone reminiscent of the science fiction novel “The Hitchhiker’s Guide to the Galaxy.” Although xAI is separate from X, its technology has been integrated into the social media network and is trained on users’ posts. Those with access to X’s premium features can ask Grok questions and receive answers.

By releasing the code through a practice known as “open sourcing,” which allows anyone to access and use it, Mr. Musk waded into a contentious debate within the artificial intelligence community over whether doing so makes the technology safer overall.

Mr. Musk, a self-described supporter of open source, did the same with X’s recommendation algorithm last year, though he has not updated it since.

Responding on Sunday to a comment about open-sourcing X’s recommendation algorithm, Mr. Musk acknowledged there was still work to be done but wrote, “This platform is already by far the most transparent & truth-seeking (not a high bar, I know).”

The release of the chatbot’s code is the latest exchange of blows between Mr. Musk and OpenAI, the maker of ChatGPT, which the volatile entrepreneur recently sued, accusing it of reneging on its founding commitment to openness. Mr. Musk, who left OpenAI a few years after its founding, has argued that such a significant technology should not be controlled entirely by tech giants like Microsoft and Google. Microsoft is a close collaborator of OpenAI.

OpenAI has said it will seek to have the lawsuit dismissed.

Since the technology’s rise in popularity last year, there has been much debate about whether or not to make generative artificial intelligence (A.I.) open source. This technology can produce realistic images and videos as well as human-like text responses. The question of whether the coding that powers artificial intelligence should be made public is a contentious one in Silicon Valley. While some engineers contend that the technology is too powerful to be left unchecked, others maintain that there are more advantages to openness than disadvantages.

By disclosing his A.I. code, Mr. Musk solidified his position in the latter camp; the move may also help him close the gap with rivals who have advanced the technology more quickly.

When the code is made public, other businesses and independent software developers will be able to use and adapt it to create their own chatbots and other artificial intelligence systems. Facebook and Instagram’s parent company, Meta, has also made its LLaMA artificial intelligence technology publicly available. Open sourcing has also been used by Google and Mistral, a well-known French start-up.

Mr. Musk, the CEO of Tesla and the owner of X and SpaceX, established xAI last year with the stated goal of helping people “understand reality.” He said in November that investors in his $44 billion deal to take X private would own a quarter of xAI.

Mr. Musk has declared that chatbots should be able to handle any topic, branding as “woke” businesses that control their technology to steer clear of controversy.

In a statement published on Friday, Mr. Musk stated, “If an AI is programmed to push for diversity at all costs, as Google Gemini was, then it will do whatever it can to cause that outcome, potentially even killing people.”

Nonetheless, there is a strong commercial component to at least some of the rhetoric around open source. With the most potent and possibly most well-liked chatbot on the market, OpenAI leads the competition and has no incentive to make its code publicly available.

On the other side, Mr. Musk and xAI are attempting to catch up and may help level the playing field by making their code open source and encouraging others to further the technology.

Subbarao Kambhampati, a computer science professor at Arizona State University, has maintained that making current A.I. technology open source is the safest course of action. However, he added that companies like Meta and xAI were not necessarily open-sourcing the technology for that reason.

Mr. Musk and Yann LeCun, Meta’s chief A.I. scientist, he argued, “are not the best messengers for this argument.”

Tech

Sam Altman Rejoins OpenAI’s Board and Takes Control of the Company


The inquiry into Sam Altman’s dramatic ouster from OpenAI more than three months ago has come to a close, a major win for the prominent CEO as he reasserts control over the A.I. startup he helped build.

In a press conference on Friday, OpenAI said that Mr. Altman, who rejoined the company only five days after being fired in November, had done nothing to warrant his dismissal and would regain a seat on the board of directors, the one position he had yet to reclaim.

Mr. Altman’s dismissal stunned Silicon Valley and threatened the survival of one of the most significant startups in the tech sector. It also raised questions about whether OpenAI, with or without Mr. Altman at the helm, was prepared to lead the industry’s fervent push into artificial intelligence.

When Mr. Altman returned to OpenAI in November, he agreed to an inquiry into the board’s actions and his own conduct, but he was not given back his board seat. The two members who had voted to remove him also resigned; their replacements, who have no other ties to the company, oversaw the law firm WilmerHale’s probe. OpenAI board chairman Bret Taylor said the much-awaited investigation into the episode was complete, though the company did not make the report public.

According to the company, the law firm’s assessment concluded that while the OpenAI board had the right to fire Mr. Altman, his actions did not require his dismissal.

Referring to Greg Brockman, the company president who resigned in protest after Mr. Altman was fired, Mr. Taylor said, “The special committee recommended and the full board expressed their full confidence in Mr. Altman and Mr. Brockman.” He added: “We are enthusiastic and fully behind Sam and Greg.”

In response to complaints regarding a lack of diversity on the board, OpenAI also added three women to the board: Fidji Simo, the CEO of Instacart; Sue Desmond-Hellmann, the former CEO of the Bill & Melinda Gates Foundation; and Nicole Seligman, the former general counsel of Sony.

Mr. Taylor, one of the replacement members named to the OpenAI board in November, predicted that the board would keep growing.

The report and the new board members were meant to let OpenAI’s management put the turmoil surrounding Mr. Altman’s dismissal behind them. The episode raised numerous questions about his leadership and about the San Francisco company’s peculiar structure, in which a nonprofit board oversees a for-profit business.

However, because it has not released the report, OpenAI has left many questions about the company unanswered. Insiders have asked whether Mr. Altman had excessive control over the conduct of the probe.

The two OpenAI board members who departed late last year, Helen Toner and Tasha McCauley, issued a statement saying, “As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable.” “We trust that the new board will effectively oversee OpenAI and ensure that it stays true to its goals.”

Mr. Taylor appeared alongside Mr. Altman at the Friday press conference, which was followed by the announcement of the new board members. He said the review concluded that the previous board had removed Mr. Altman in good faith but had not foreseen the turmoil that would follow his termination.

The review found that the board’s decision was not motivated by worries about the security or safety of the product, Mr. Taylor said. “It was just a lack of trust between Mr. Altman and the board.”

Following Mr. Taylor’s prepared remarks, Mr. Altman commended the company’s and its partners’ tenacity both during and following his dismissal. He remarked, “I’m glad this whole thing is over.”

OpenAI released a six-paragraph summary of the report, which said WilmerHale had interviewed numerous people, including former OpenAI board members, and examined 30,000 documents.

The summary said the prior board’s stated rationale for Mr. Altman’s termination, that he was not “consistently candid in his communications with the board,” was accurate. It also said the board had not expected its actions to destabilize the company.

According to the company, WilmerHale briefed Mr. Taylor and Lawrence H. Summers, the former Treasury secretary who was also named to the board in November, orally on the report, which will not be made public.

According to Mr. Taylor, OpenAI has implemented several measures to improve how the company is governed, including new board governance guidelines, a conflict-of-interest policy, and a whistleblower hotline.

OpenAI’s report summary did not address the concerns that senior executives had raised about Mr. Altman with the previous board. Before his termination, Mira Murati, OpenAI’s chief technology officer, and Ilya Sutskever, its chief scientist, had concerns about Mr. Altman’s management style, citing what they described as a history of manipulative behavior.

Through an attorney, Dr. Sutskever has called the assertions “false.” In a Slack message on Thursday, Ms. Murati said that she had never approached the board to voice concerns about Mr. Altman; the input she gave the board was the same input she had given Mr. Altman directly.

“I am glad the independent review is over and we can all go forward together,” Ms. Murati wrote on X, the platform that was formerly known as Twitter, on Friday.

The Securities and Exchange Commission is still investigating OpenAI over the board’s conduct and whether Mr. Altman may have misled investors. Companies that hire outside law firms frequently hand such reports to government investigators once they are finished.

An OpenAI board spokesperson declined to comment on whether the report would be sent to the S.E.C.

OpenAI, valued at over $80 billion in its most recent funding round, is at the forefront of generative A.I., technology that can produce text, images, and sounds. Many believe generative A.I. could transform the technology industry as profoundly as the web browser did roughly thirty years ago. Others fear the technology could harm society by helping spread false information online, eliminating a great number of jobs, and possibly endangering humankind.

Mr. Altman came to embody the industry’s drive toward generative A.I. after OpenAI released its online chatbot, ChatGPT, in late 2022. Approximately a year later, the board abruptly fired him, saying it no longer trusted him to lead the company.

The remaining six board members comprised three founders and three independent members. Dr. Sutskever, one of OpenAI’s founders, voted with the three independent members to remove Mr. Altman as chief executive, citing, without elaborating, his lack of “consistent candidness in his communications.”

Another founder, Greg Brockman, left the company in protest. A few days later, Dr. Sutskever said he had changed his mind about dismissing Mr. Altman and essentially stepped back from the board’s decision, leaving Mr. Altman opposed by only the three independent members.

OpenAI was established as a nonprofit in 2015. Mr. Altman later created a for-profit subsidiary and secured $1 billion from Microsoft. The nonprofit’s board, whose declared mission was to develop artificial intelligence for the good of humanity, retained total authority over the new division; Microsoft and other investors had no legal power to choose the company’s management.

Mr. Taylor, a former Salesforce executive, was among the board members chosen to replace those who departed, in an effort to calm the chaos and bring Mr. Altman back to the company. Mr. Altman, however, did not return to the board. Mr. Taylor and Mr. Summers were put in charge of the inquiry into Mr. Altman’s termination.

Dee Templeton, vice president of technology and research partnerships at Microsoft, a key OpenAI partner, holds an observer seat on the board. Microsoft declined to comment on the board and the report on Friday.

Corporate governance experts criticized the new board for its lack of diversity. In November, Mr. Taylor stated to The Times that he would appoint “qualified, diverse candidates” to the board, candidates who represented “the fullness of what this mission represents, which is going to span technology, A.I. safety policy.”


Tech

3 Side Initiatives For ChatGPT By 2024


Side hustles, whether as alternative or supplemental employment, are nowhere near decline: they contribute an astounding $1.27 trillion to the U.S. economy and account for an estimated 38% of the workforce (a figure set to rise as flexible working becomes the norm and younger generations enter the workforce). In fact, with the introduction of ChatGPT, they are about to explode.

According to projections from Statista, almost 86 million American workers will be self-employed by 2027, accounting for 50.9% of the country’s workforce, consistent with the steady growth rate seen over time. Even that statistic may understate the number of side gigs being pursued by professionals globally, as ChatGPT usage is rising in tandem with this trend.

ChatGPT is useful for anyone considering working for themselves because it lowers the initial barriers to entry: time, money, and expertise.

It is important to remember that, although created by humans, the technology lacks human common sense and is prone to misinformation, factual errors, and what are known as “hallucinations.” Because of this, it is imperative that you already possess in-depth knowledge of the industry you want to monetize as a side gig. That knowledge lets you quickly spot any errors the chatbot makes and allows you to adapt and personalize its output with your own inputs, skills, and style.

In this sense, ChatGPT is not your work; it is merely the scaffolding. If you rely on ChatGPT exclusively, you will shortchange your clients, who could obtain the same service by using it themselves. So take care to make the process as human as possible.

Keeping that in mind, here are three simple methods to profit from ChatGPT’s features:

1. Writing e-books

ChatGPT is an excellent tool for quickly drafting e-books that you can sell on platforms like Amazon. Rather than giving ChatGPT one broad request such as “Write a book about how to start software engineering,” create a series of focused prompts, one for each section of your book, to get the high-quality material you need.
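The section-by-section approach can be scripted. Below is a minimal sketch in Python that expands a book topic and chapter outline into one focused prompt per section; the topic, outline, and prompt wording are illustrative assumptions, and each resulting string would be pasted into ChatGPT (or sent via its API) one at a time.

```python
# Sketch: build one focused prompt per section instead of a single broad one.
# The topic, outline, and prompt template below are illustrative assumptions.

BOOK_TOPIC = "getting started in software engineering"

OUTLINE = [
    "choosing a first programming language",
    "building a portfolio of small projects",
    "preparing for technical interviews",
]

def section_prompts(topic: str, outline: list[str]) -> list[str]:
    """Turn a broad book idea into narrow, per-section prompts."""
    prompts = []
    for i, section in enumerate(outline, start=1):
        prompts.append(
            f"Write section {i} of an e-book on {topic}. "
            f"Focus only on {section}. "
            "Use a practical tone and include one concrete example."
        )
    return prompts

for prompt in section_prompts(BOOK_TOPIC, OUTLINE):
    print(prompt)
```

Keeping each request this narrow tends to yield more specific, usable text than one catch-all question, and it gives you natural checkpoints to review and personalize each section.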

2. Lesson plans

ChatGPT is also an invaluable resource for educators and tutors. It can be instructed to produce worksheets, tests, exercises, and other educational materials based on the subjects you supply, and it can help you brainstorm innovative ways to design and organize classes for your tutoring side business. You can then earn money by selling your lesson plans and lesson plan templates to other educators on online marketplaces such as Teachers Pay Teachers.

3. An anonymous YouTube channel

Did you know that you can build a successful YouTube channel around your passion or area of expertise and earn passive income without ever appearing on camera? Even if you are camera shy, AI can still be used to start an educational channel.

Start by choosing your channel’s theme and primary focus topic. Who are the people you are trying to attract, what are their urgent questions, and what information do they need right away? Once your preliminary research and planning have produced a viable channel concept, you can use ChatGPT to develop scripts for educational videos and an AI video tool to produce scenes that correspond with each script. You can even use AI voiceover tools to eliminate the need to be seen or heard.
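One way to organize the script-to-scenes step is with a small planning helper. The sketch below is an illustration, not part of any particular AI video tool’s API: it splits a drafted script into one scene per paragraph and estimates narration time at an assumed pace of 150 spoken words per minute, so each scene can be handed to a video generator and a voiceover tool.

```python
# Sketch: split a generated script into scenes and estimate narration time.
# The 150-words-per-minute pace is an assumption, not a fixed standard.

WORDS_PER_MINUTE = 150

def scene_plan(script: str) -> list[dict]:
    """One scene per paragraph, with an estimated narration duration."""
    paragraphs = [p.strip() for p in script.split("\n\n") if p.strip()]
    scenes = []
    for i, paragraph in enumerate(paragraphs, start=1):
        words = len(paragraph.split())
        scenes.append({
            "scene": i,
            "narration": paragraph,
            "seconds": round(words / WORDS_PER_MINUTE * 60, 1),
        })
    return scenes

demo_script = (
    "Welcome to the channel. Today we cover budgeting basics.\n\n"
    "Rule one: track every expense for a full month before changing anything."
)
for scene in scene_plan(demo_script):
    print(scene)
```

A plan like this also makes it easy to spot scenes whose narration runs too long before you pay for video or voiceover generation.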

So which concept are you planning to try this year? With ChatGPT’s assistance, what side project will you start to change the world more quickly while earning a sizable income at the same time?


Tech

Apple reaffirms its commitment to privacy, presenting a challenge to Samsung and Google


The conflict between iPhone and Android is about to heat up, possibly to a level never seen before. And if you use a premium Samsung device, a recent remark unexpectedly made by Apple should encourage you to consider switching.

This week, in a galaxy far, far away, Apple made a major announcement that will greatly affect the approaching battle between its iPhone and Android rivals, primarily Samsung, at the high end of the market.

Apple claimed that some businesses “regularly scan personal information in the cloud to monetize their users’ information. Apple doesn’t. We’ve taken a completely different route, one that puts our users’ security and privacy first.”

The claim was made in Australia, another country considering imposing a monitoring mandate on internet service providers to identify inappropriate activity among their users.

The nation’s eSafety plans, which aim to prevent child sexual abuse and terrorism or radicalization, seem internally contradictory: they promise to protect user privacy and security while actively tracking user content in some capacity.

Apple said in a statement obtained by The Guardian that “scanning every user’s privately stored iCloud data would pose serious security and privacy problems.” It added: “Bulk surveillance of communications and storage systems is made possible by scanning for specific content.”

There are two noteworthy aspects to this statement. Apple is drawing attention to the risk that once technology suppliers technically break the security of end-to-end encryption, they lose their main line of defense against scope creep. Put simply, demands to scan for political dissent and sexuality follow close behind child sexual abuse material (CSAM) and radicalization.

Apple warned that “tools of mass surveillance have widespread negative implications for freedom of opinion and expression and, by extension, democracy as a whole.” It cautioned, for instance, that merely knowing the government could order a provider to monitor user behavior poses a real risk of chilling legitimate political, expressive, and associational activity, as well as economic activity.

This is noteworthy because, back in 2021, others and I made the same point when Apple proposed device-side scanning for CSAM: “At the insistence of the governments where Apple sells its devices, Apple will be pressed into expanding its CSAM screening to look for other content.” Apple had previously denied such demands on the grounds that they were technically impractical. That, however, abruptly changed.

Apple eventually changed its mind and abandoned its device-side scanning plans amid intense criticism that the scheme was so un-Apple.

Setting aside the irony of that much-needed retreat, Apple’s latest privacy affirmation is noteworthy because it points to what will soon become a major differentiator between the Apple and Google ecosystems: private on-device processing versus open cloud processing. And that implies a device-based competition between Apple and Samsung.

Google scans content, including images saved in its cloud, for CSAM and other material. It also leverages the cloud for further categories of user data classification. This is generating news right now as Gemini rolls out across Google’s ecosystem, accompanied by alerts informing users that their activity will probably be saved in the cloud and might be reviewed by humans.

This is not what Apple wants to do. Its strategy is to run AI on its devices, inside the applicable end-to-end encryption bubble, and to keep itself from accessing any content saved in the cloud. Although this forgoes the economies of scale of cloud processing and is far more difficult, Apple will run on-device wherever it can.

As I reported last month, Apple appears to be benchmarking the effectiveness of device-side generative AI against the top cloud-side options. We are thus heading toward parallel universes: iOS pushing its inevitable generative AI built around device privacy and security, and Google pushing its alternative, built around privacy alerts and advisory notifications about what data is used and kept online.

One day into the annual Mobile World Congress in Barcelona, Samsung is making a lot of noise about the Galaxy, and it is all about AI. The company released a statement saying, “The era of mobile AI has come, and early adopters are already riding the waves of the latest innovation.”

Samsung has long promoted Galaxy smartphones to iPhone users through a “try before you buy” app, and it is now extending that offer to other Android users. “Samsung has made major updates to the Try Galaxy app to spark the conversation and enable those who are still undecided to experience Galaxy AI without switching phones.”

On the first day of MWC, however, the major news was the official launch of Gemini in Google Messages. The company said that “Gemini in Google Messages is being released gradually and only to Google Messages beta testers for now.” Nevertheless, “you can chat with Gemini in the Google Messages app to draft messages, brainstorm ideas, plan events, or simply have a fun conversation.”

But for Samsung, this crystallizes the conflict. Samsung has described Galaxy AI as a “hybrid approach” spanning device and cloud AI, and it is free to promote the technology as much as it wants. In the end, though, these are Android devices, and given Google’s greater AI capabilities, Google’s ecosystem control and AI will prevail.

Thus, Google’s problems become Samsung’s problems, and the Apple versus Google argument carries over.

From a device standpoint, iPhones and their Samsung counterparts have never been closer in functionality, features, and performance, in my opinion. The coming AI tidal wave, however, will introduce a previously unseen degree of differentiation. And if I intend to spend $1,000 to $2,000 on a smartphone, I want uncompromising privacy protections.

I would not want an ecosystem that might misidentify content outside my encrypted cocoon or pose a risk of cloud-side data intrusions. I want to know exactly what the governing principles are for the sovereignty and protection of my content, and I would prefer a technical assurance that does not compromise. Furthermore, Android security updates remain less organized than Apple’s, and there are far more dangerous malware attacks on Android (some of which have specifically targeted Samsung devices).

Therefore, in 2024, as generative AI strives to permanently alter our smartphones, the case for iPhones at the high end of the market is stronger than ever, even with Google’s AI head start and rapid release of AI features. Perhaps that helps explain why all seven of the most popular high-end devices last year were iPhones.
