AI - ReadWrite: Crypto, Gaming & Emerging Tech News
https://readwrite.com/category/ai/

Workplace AI negatively impacts quality of life, study finds
https://readwrite.com/workplace-ai-negatively-impacts-quality-of-life-study-finds/ (Tue, 19 Mar 2024 16:21:06 +0000)

A new study into the influence of artificial intelligence in the workplace has produced negative results, indicating that quality of life declines as the use of AI rises.

The U.K.-based Institute for the Future of Work (IFOW) examined the link between worker wellbeing and technology exposure, analyzing responses from almost 5,000 people representative of the working population.

Results have shown that “quality of life negatively correlated with frequency of interaction with newer workplace technologies such as wearables, robotics, AI and ML software.”

The team goes on to say in their published report that “this is consistent with research that connects such technologies to exacerbated feelings of disempowerment, increased sense of insecurity, task intensification and stress and loss of meaning, as well as anxiety and poorer overall health.”

Not all technologies have a negative impact though, with some actually helping with employee wellbeing.

“Results showed that digital information and communication technologies correlated with improved quality of life, whereas newer and more advanced technologies were correlated with reduced wellbeing.”

In other words, this research found that computers and messaging tools bring workers more freedom and flexibility, while smart devices and AI are not as well received.

Use of AI in the workplace

One of the main focuses of businesses appears to be around the productivity of their employees, with the use of artificial intelligence being implemented to handle repetitive tasks or speed up processes.

Some are even opting to replace workers with AI completely, including tech giant IBM. Back in 2023, its CEO publicly announced plans to replace nearly 8,000 jobs with the technology.

At the time, he noted that back-office functions, specifically in the human resources sector, would be the first to face these changes.

The post Workplace AI negatively impacts quality of life, study finds appeared first on ReadWrite.

AI wearable device to help people speak without vocal cords
https://readwrite.com/ai-wearable-device-to-help-people-speak-without-vocal-cords/ (Tue, 19 Mar 2024 15:26:12 +0000)

It could soon be possible to speak without vocal cords, as engineers are focusing on a square-inch device that can help people regain their voice.

The tiny new patch, being developed by engineers at the University of California, could be extremely beneficial for people with pathological vocal cord conditions or those recovering from laryngeal cancer surgery.

In a paper published in the journal Nature Communications last week (Mar. 12), the researchers outlined their invention and the results of testing the device on eight adults.

They describe the wearable tool as being a “self-powered sensing-actuation system” based on soft magnetoelasticity to enable assisted speaking. It works by patients articulating sentences through muscle movements that are usually used in regular speech or lip-synching.

The sensing component of the device then recognizes the movements without the vibration of vocal folds. Electrical signals are then fed to a pre-trained machine-learning model that converts throat movement into voice signals.

Other solutions exist for helping people without vocal cords to speak, but these are largely limited to handheld electrolarynx devices and tracheoesophageal-puncture procedures, both of which can be inconvenient and uncomfortable.

How does it work?

The system was attached to the throat of the participants and data on laryngeal muscle movement was collected, with a machine-learning algorithm used to correlate the resulting signals to specific words.

The research team had the patients pronounce five sentences – both out loud and voicelessly – including ‘I don’t trust you’ and ‘I love you.’ Each person had to repeat the five sentences 100 times for data collection purposes.

The overall prediction accuracy of the model was 94.68%, working well across the different people involved.
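The report described here maps repeated throat-signal patterns to a small set of known sentences. As a rough illustration of that classify-signals-to-sentences idea, here is a nearest-centroid sketch over synthetic data. Everything below (the feature dimensions, noise levels, and separability) is invented; the actual system learns from magnetoelastic sensor readings with its own trained model.

```python
import random

random.seed(0)

N_SENTENCES = 5   # the study used five fixed sentences
REPS = 100        # each repeated 100 times per participant
DIM = 8           # dimensionality of our synthetic "throat-signal" features

# Synthetic stand-in data: each sentence gets a characteristic pattern plus noise.
def sample(sentence_id):
    return [(sentence_id + 1) * 0.5 * (d + 1) + random.gauss(0, 0.3)
            for d in range(DIM)]

train = {s: [sample(s) for _ in range(REPS)] for s in range(N_SENTENCES)}

# Nearest-centroid classifier: average the training signals for each sentence,
# then assign a new signal to the closest average.
centroids = {s: [sum(v[d] for v in vecs) / len(vecs) for d in range(DIM)]
             for s, vecs in train.items()}

def predict(x):
    return min(centroids,
               key=lambda s: sum((a - b) ** 2 for a, b in zip(x, centroids[s])))

# Score on fresh samples, analogous to the study's reported accuracy figure.
test_set = [(s, sample(s)) for s in range(N_SENTENCES) for _ in range(20)]
accuracy = sum(predict(x) == s for s, x in test_set) / len(test_set)
print(f"accuracy: {accuracy:.2%}")
```

Repeating each sentence many times, as the participants did, is what makes the per-sentence averages stable enough to separate.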

Going forward, the research team plans to expand the vocabulary of the device through machine learning and test their findings in people with speech disorders. 

The post AI wearable device to help people speak without vocal cords appeared first on ReadWrite.

A cyberpunk game with AI actors that make up their own dialogue? Count us in
https://readwrite.com/a-cyberpunk-game-with-ai-actors-that-make-up-their-own-dialogue-count-us-in/ (Tue, 19 Mar 2024 15:53:11 +0000)

It seems as though complaints about wooden acting and poor NPC dialogue will become a thing of the past as gaming development continues to use more and more generative AI in its processes. We aren’t there yet, but by the end of the year, we will probably have our first game with NPC characters generating their own realistic dialogue depending on what you ask them.

For gaming realism, this could be a game changer, and while it is unclear where it may actually lead us, Unity and generative AI company Convai are keen to give us a sneak peek into the future with the release of Project Neural Nexus – a demo “game” which will feature smart NPCs and Unity’s MUSE behavior tool.

“Step into the world of Neo City, a cyberpunk theme city where danger lurks at every corner. You wake up at a hotel and are being chased by the police and killer robots. Your objective is to make out of the hotel filled with assassins – alive. But you are not alone. You have help from your AI companion, who can equip you with weapons, teach you how to shoot, guide you at every step and is helping you get out of the hotel. Your decisions change the storyline but be careful, one wrong step and it’s game over. The game is powered by Convai’s smart NPCs where you can have open-ended conversations, make decisions, perform actions, interact with various environmental objects, and are guided by an AI companion. Welcome to Neo City. Your survival story begins now.”

That’s how Convai is positioning Project Neural Nexus, which is more of a tech demonstration of where gaming could be headed, and headed quickly. The problem is this: if an NPC can talk to you about anything, and you can ask it anything, how do you keep the story on track? The idea of an NPC answering questions more realistically is appealing, but what we see in demonstrations like this is the “game” being played the way the developers intend. The general public will not necessarily play as nicely.

When you watch the video, there is no doubt it is exciting and impressive in equal measure. Text-to-speech may not quite be there yet when it comes to vocal emotion, and some of the dialogue sounds a little stilted, but the potential is there for all to see, especially as we move toward a world that has GPT-5 in it.

You can sign up for early access now, and it should be released later in the year. Interesting times ahead.

The post A cyberpunk game with AI actors that make up their own dialogue? Count us in appeared first on ReadWrite.

Sam Altman calls out Elon Musk for taking swipe at Jeff Bezos
https://readwrite.com/sam-altman-adds-to-disdain-for-elon-musk-taking-swipe-at-gesture-toward-jeff-bezos/ (Tue, 19 Mar 2024 12:44:10 +0000)

OpenAI CEO Sam Altman has reiterated his disdain for Elon Musk as he commented on a gesture the Tesla billionaire made toward Amazon chief Jeff Bezos in 2021.

When Musk surpassed Bezos as the richest person in the world, he sent him a silver-medal emoji to mark his own newfound status, in a manner typical of the controversial South African-born entrepreneur.

Altman’s comments, during an interview with podcaster Lex Fridman, extended to complaints over a lack of collaboration on artificial intelligence (AI). His comments mark another chapter in the ongoing dispute between the influential pair, on the back of recent legal action taken by Musk to sue Altman and OpenAI for supposedly reneging on its original mission of developing responsible AI technology.

As expected, the Microsoft-backed company made a court filing to counter the lawsuit.

What did Sam Altman say about Elon Musk?

In a conversation on the Lex Fridman podcast, the host put it to Altman that all of the AI heavyweights should be working together on matters of shared interest and toward the goal of artificial general intelligence (AGI), but the boss of the ChatGPT maker said collaboration wasn’t “really the thing [Musk is] most known for.”

He continued, “I was thinking, someone just reminded me the other day about how the day that he surpassed Jeff Bezos for richest person in the world, he tweeted a silver medal at Jeff Bezos.”

“I hope we have less stuff like that as people start to work towards AGI,” said Altman.

Bezos would later retake the number one spot after a surge in the price of Amazon shares, but the one-upmanship witnessed in this incident is a theme that runs through Musk’s business dealings and relationships. He similarly wants the upper hand over OpenAI, the company he co-founded with Altman before a parting of ways led him to create his own startup, xAI.

This week, it confirmed the open release of its Grok artificial intelligence (AI) model, detailing the weights and architecture of the system.

Image credit: Canva

The post Sam Altman calls out Elon Musk for taking swipe at Jeff Bezos appeared first on ReadWrite.

OpenAI CEO Sam Altman says GPT-4 ‘kinda sucks’, hints at GPT-5 boost
https://readwrite.com/openai-ceo-sam-altman-says-gpt-4-kinda-sucks-hints-at-gpt-5-boost/ (Tue, 19 Mar 2024 12:05:47 +0000)

OpenAI CEO Sam Altman has taken a very honest approach as he suggests GPT-4, the latest AI model from the team, isn’t as impressive as the world is making out. 

In an interview with Lex Fridman, published on Monday (Mar. 18), he said, “I think it kind of sucks,” when asked about GPT-4 and its capabilities.

The OpenAI boss, 38, went on to say: “I think it is an amazing thing, but relative to where we need to get to and where I believe we will get to, at the time of like GPT-3, people were like ‘oh this is amazing, this is like marvel of technology’ and it is, it was.

“But now we have GPT-4 and look at GPT-3 and you’re like that’s unimaginably horrible.”

In the process of creating new models, the technology mastermind implies that critiquing former updates is key: “I think it is our job to live a few years in the future and remember that the tools we have now are going to kind of suck, looking backward at them, and that’s how we make sure the future is better.”

Altman explained that he takes inspiration from GPT-4 and uses it as a ‘brainstorming partner.’

While this model hasn’t been praised too highly by its company’s founder, it has amassed 180 million weekly users for OpenAI and has helped to drive generative AI into the mainstream consciousness.

What can we expect from GPT-5?

When asked about the date of GPT-5, Altman answered “I don’t know. That’s an honest answer.

“We will release an amazing model this year. I don’t know what we’ll call it.”

In discussing the leap from GPT-4 to a potential GPT-5, he said: “I’m excited about it being smarter…

“It’s getting like better across the board.”

The billionaire vowed there would be ‘many different things’ that people will see over the coming months, but these could be different from the GPT-5 model everyone is expecting. And, despite the interviewer’s best efforts, he wouldn’t give away any details in what is a highly competitive sector. 

This will likely further fuel rumors about future updates such as GPT-5, GPT-4.5, and a GPT-4.5 Turbo.

Altman previously made guarded comments on the importance of mitigating the dangers of artificial intelligence (AI), as he revealed some concerns about the technology keep him awake at night.

Featured Image: Canva

The post OpenAI CEO Sam Altman says GPT-4 ‘kinda sucks’, hints at GPT-5 boost appeared first on ReadWrite.

6 Ways AI is Changing Real Estate
https://readwrite.com/ai-changing-real-estate/ (Tue, 19 Mar 2024 14:16:54 +0000)

The real estate industry, known for its reliance on traditional methods, is undergoing a major shift thanks to the integration of AI. From property valuations to customer service, AI is not only streamlining operations but also enhancing the customer experience and reshaping business strategies.

Understanding AI’s Influence in Real Estate

AI currently has its fingers on every major industry – instigating big changes from the inside out of sectors that have been stuck in their ways for 20-plus years. But it could be argued that there’s no industry with more potential for major, sweeping change than real estate.

Let’s explore a few of the big ways AI is currently changing real estate, along with peeking at some of the ways it will continue to evolve moving forward.

1. AI-Driven Analytics for Property Valuations

Accurately valuing a property is both an art and a science. It has traditionally relied on a blend of physical inspections and market trend analysis. However, AI is proactively changing this process by introducing AI-driven analytics, offering a new level of precision and efficiency.

At the core of AI-driven property valuations are sophisticated algorithms capable of sifting through extensive datasets, including historical transaction prices, current market trends, property features, and even socio-economic indicators. 

Unlike traditional appraisal methods, which might depend heavily on the appraiser’s experience and available local data, AI systems can analyze national and global data trends to make more accurate predictions. 

Practical applications and use cases of AI in property valuations are already evident across the real estate industry. For example, Zillow’s “Zestimate” leverages AI to provide instant property valuations based on public and user-submitted data. Similarly, Redfin uses machine learning models to deliver accurate home value estimates, taking into account thousands of data points.

Another example is the use of AI by banks and financial institutions in their appraisal processes for mortgage lending. By integrating AI analytics, they quickly assess a property’s value, making the loan approval process faster and more efficient.
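The simplest version of the data-driven estimates described above is a "comparable sales" calculation: price a property by averaging the most similar recent sales. The sketch below is purely illustrative, with invented sales data and ad-hoc feature scaling; production models like Zestimate use far richer data and learned weights.

```python
# Each past sale: (square_feet, bedrooms, age_years, sale_price)
sales = [
    (1400, 3, 20, 310_000),
    (1600, 3, 15, 345_000),
    (1750, 4, 10, 390_000),
    (1200, 2, 30, 255_000),
    (2000, 4, 5, 450_000),
]

def estimate_value(sqft, beds, age, k=3):
    """Average the k most similar past sales (a tiny k-NN regressor)."""
    def distance(sale):
        s_sqft, s_beds, s_age, _ = sale
        # Scale features so square footage doesn't dominate the distance.
        return (((sqft - s_sqft) / 500) ** 2
                + (beds - s_beds) ** 2
                + ((age - s_age) / 10) ** 2)
    nearest = sorted(sales, key=distance)[:k]
    return sum(price for *_, price in nearest) / k

# Value a 1,500 sq ft, 3-bed, 18-year-old home from its three closest comps.
print(round(estimate_value(1500, 3, 18)))
```

The feature scaling step is the part appraisal models spend the most effort on: without it, a 100 sq ft difference would swamp a whole extra bedroom in the similarity measure.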

2. Enhancing Customer Service with AI

AI has been fast at work over the past few years, using advanced chatbots and virtual assistants that are capable of handling client inquiries quickly in real-time, regardless of the time of day.

Several different companies leverage AI chatbots and digital assistants, including HouseJet. Their platform has two different AI technologies: MAX AI and LISA AI. MAX is an AI-powered lead generation tool that optimizes real estate agent marketing strategies to help them improve the number (and quality) of leads they bring in on a daily basis. Then, LISA acts as a digital AI assistant that texts clients on the agent’s behalf to set appointments. This frees the agent up to actually go to appointments, take calls, and spend more time with family during “off” hours.

This is the new approach that agents will be using moving forward. AI won’t necessarily be the one showing the house or doing the negotiations – we still need real, breathing agents for that – but it will help offload some of the time-consuming administrative tasks that eat up a realtor’s schedule. 

3. AI in Property Management

When it comes to property management, AI has the ability to automate routine and time-consuming tasks. This includes everything from sorting and responding to tenant inquiries to managing lease renewals and payments. By automating these processes, AI frees up property managers to focus on higher-level strategic aspects of their role, such as tenant relations and property improvements.

For example, AI-powered systems can prioritize maintenance requests based on urgency and schedule them automatically with the service providers. Technology is also making strides in energy management within properties, using smart algorithms to optimize heating, cooling, and lighting systems. These systems can analyze usage patterns, weather forecasts, and even occupancy rates to adjust settings in real-time, reducing energy consumption and costs.
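The request-triage idea above reduces to a priority queue over urgency scores. In this toy sketch the scores and request descriptions are invented and hard-coded; in a real system an ML model would assign them from the request text and sensor data.

```python
import heapq

# Incoming requests with model-assigned urgency scores (higher = more urgent).
requests = [
    (0.95, "Burst pipe in unit 4B"),
    (0.20, "Squeaky door hinge, lobby"),
    (0.70, "HVAC not cooling, unit 12A"),
    (0.40, "Flickering hallway light"),
]

# heapq is a min-heap, so negate the urgency to pop the most urgent first.
queue = [(-urgency, desc) for urgency, desc in requests]
heapq.heapify(queue)

schedule = []
while queue:
    _, desc = heapq.heappop(queue)
    schedule.append(desc)

print(schedule[0])  # the burst pipe is dispatched first
```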

Beyond routine maintenance, AI is paving the way for predictive maintenance. By analyzing data from various sensors and systems within a building, AI can predict when equipment might fail or when structures may need repairs before they become critical issues. This proactive approach can save huge amounts of money in emergency repairs and downtime, ensuring that properties remain in top condition and enjoy optimum cash flow.

4. AI and Real Estate Marketing

One of the big areas where AI excels (in a marketing context) is in its ability to analyze large volumes of data, including browsing habits, interaction rates, and past purchasing behavior. Based on this, it can create highly targeted marketing campaigns. This means real estate agents can now send property listings that align closely with a client’s specific preferences, such as location, price range, and property features (without spending hours browsing listings or sifting through the MLS). This increases the likelihood of engagement and response, which makes both the client and the agent happy.
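At its core, matching listings to a client's stated preferences is a filter over hard constraints before any learned ranking is applied. The listings, field names, and criteria below are all made up for illustration; real platforms rank with learned models rather than simple filters.

```python
listings = [
    {"city": "Austin", "price": 420_000, "features": {"garage", "garden"}},
    {"city": "Austin", "price": 650_000, "features": {"pool"}},
    {"city": "Dallas", "price": 400_000, "features": {"garage"}},
]

def matches(listing, city, max_price, must_have):
    # A listing qualifies if it is in the right city, within budget,
    # and its feature set contains every required feature.
    return (listing["city"] == city
            and listing["price"] <= max_price
            and must_have <= listing["features"])

picks = [l for l in listings
         if matches(l, city="Austin", max_price=500_000, must_have={"garage"})]
print(len(picks))  # only the first listing satisfies all three constraints
```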

Redfin recently released a new AI-powered tool that improves listing photos by allowing home buyers to see what they could do with a space by just clicking a few buttons. This shows them different flooring types, paint colors, countertop finishes, etc. As a result, even outdated homes get a fair shake when listed.

On top of all of this, AI-driven marketing tools can segment audiences with surgical-like precision, ensuring that marketing messages are tailored to different groups based on their unique characteristics and interests. This level of personalization enhances the client experience and improves the efficiency of marketing efforts. The result is better ROI across the board. 

5. Virtual Property Tours and AI

Virtual property tours are not entirely new; however, AI has created a whole new world of opportunities and advantages. Traditionally, virtual tours have been simple panoramic images or video walkthroughs. Now, with AI, these tours are highly interactive and personalized experiences.

AI can tailor the virtual tour to highlight features specific to a client’s preferences, such as focusing on the kitchen. Or if another client is big on having outdoor living, the virtual tour can revolve around the outdoor space.

Not only that, but AI-driven virtual tours can provide real-time information overlay, such as material finishes, appliance details, or even renovation possibilities. This level of interaction and information has never been possible before now.

AI also enhances virtual tours by making them more engaging and informative. Chatbots can answer questions, provide additional property details, and guide potential buyers to areas of the home that match their interests. This gives home buyers more flexibility in how they decide to purchase a home.

6. AI in Real Estate Financing

The loan processing stage has historically been defined by lots of paperwork and waiting periods. Now, thanks to AI, it’s seeing significant improvements in efficiency. AI algorithms are capable of rapidly analyzing vast amounts of data like credit scores, income history, and debt ratios. This can expedite the pre-approval and application processes.

We’re also seeing some advances in the industry where AI is able to automate document verification and fraud detection. This speeds up the process and allows human loan officers to focus on more complex cases or customer service.

It’s also worth noting that AI is giving lenders more sophisticated risk assessment models to use in the underwriting and loan approval processes. Whereas traditional lending models often rely on a narrow set of criteria, AI can incorporate a broader range of factors. This includes non-traditional credit data to further assess a borrower’s creditworthiness. 

The result of all of this is more nuanced risk profiles. In some cases, this opens up financing opportunities for individuals who might be marginalized by traditional models. For example, this would include freelancers with irregular income streams or first-time buyers with limited credit history.
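One minimal way to picture a broader-feature risk model is a logistic score whose inputs include a non-traditional signal alongside the usual credit factors. The weights, feature names, and applicants below are entirely invented for illustration and bear no relation to any real lender's model.

```python
import math

WEIGHTS = {
    "credit_score": 0.01,       # traditional factor
    "debt_to_income": -3.0,     # traditional factor (higher ratio hurts)
    "income_stability": 1.5,    # non-traditional: e.g. deposit regularity
}
BIAS = -7.0

def approval_probability(applicant):
    # Weighted sum of features squashed into a probability via the sigmoid.
    z = BIAS + sum(w * applicant[k] for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

# Same traditional profile, but one applicant shows steady deposits in practice.
freelancer = {"credit_score": 680, "debt_to_income": 0.35, "income_stability": 0.9}
thin_file = {"credit_score": 680, "debt_to_income": 0.35, "income_stability": 0.2}
print(f"{approval_probability(freelancer):.2f} vs {approval_probability(thin_file):.2f}")
```

The point of the extra feature is visible in the two scores: identical traditional criteria, different outcomes once the non-traditional signal is included.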

The Future is Bright for AI in Real Estate

If you were to fast-forward 25 years and look back at this moment, it would be overwhelmingly evident that we are only at the very beginning of the AI journey in real estate. With as much innovation as has occurred over the past two to three years, it’s easy to feel like we’ve arrived. However, the proverbial train is only just now leaving the station.

Now’s the time for real estate professionals to really lean in and understand what’s happening, so they can grow along with the technology that’s feeding this industry!

The post 6 Ways AI is Changing Real Estate appeared first on ReadWrite.

ByteDance researcher mistakenly added to US safety body for AI
https://readwrite.com/bytedance-researcher-mistakenly-added-to-us-safety-body-for-ai/ (Tue, 19 Mar 2024 13:21:48 +0000)

A US safety body for artificial intelligence (AI) experts has admitted a researcher from China-owned ByteDance was mistakenly added to an online discussion channel.

The worker, employed by the parent company of TikTok, was granted access to a Slack group chat used by members of the US National Institute of Standards and Technology (NIST).

It is an embarrassing incident for the organization, given that TikTok is currently embroiled in a high-profile national debate in the States over whether the popular video-hosting app could be used by the Chinese government to spy on, or influence, a significant number of American citizens.

TikTok is said to have almost 150 million users in the US, its most important market. The company is ready to strongly defend its status after the House of Representatives passed a bill that could see the platform banned in the US within six months unless it divests from its Chinese ownership.

The proposed legislation will now proceed to the Senate, where the outcome is less predictable.

A spokesman for the Beijing government insisted the US has “never found evidence that TikTok threatens national security,” and warned the prohibitive action “will inevitably come back to bite the United States itself.”

How has NIST responded to the news?

The researcher, reportedly based in California, was added to a Slack conversation between members of NIST’s US Artificial Intelligence Safety Institute Consortium, which is said to comprise around 850 users.

An email from the safety body has set out how it has reacted to the mishap.

“Once NIST became aware that the individual was an employee of ByteDance, they were swiftly removed for violating the consortium’s code of conduct on misrepresentation,” it read.

The working group on AI safety developments is a multi-agency collaboration, set up to address concerns about and measure the risks of the evolving technology. The consortium includes various American big tech firms, researchers, startups, NGOs, and others.

Image credit: Ideogram

The post ByteDance researcher mistakenly added to US safety body for AI appeared first on ReadWrite.

YouTube rolls out labeling for ‘realistic’ AI content
https://readwrite.com/youtube-rolls-out-labeling-for-realistic-ai-content/ (Tue, 19 Mar 2024 12:55:33 +0000)

In this new age of artificial intelligence, platforms worldwide have seen a rise in AI-generated content. To help viewers better understand if what they’ve watched has been created by a human or otherwise, YouTube has outlined some new disclosures.

The new tool was announced in a blog post published yesterday (Mar. 18) on the YouTube Official Blog.

In Creator Studio, creators must now disclose to viewers when realistic content is made with altered or synthetic media, including generative AI.

These disclosures will appear as labels that will be visible in the expanded description or on the front of the video player. YouTube says this applies to “content a viewer could easily mistake for a real person, place, or event…”

The labeling won’t apply to all AI videos though, as the platform says: “We’re not requiring creators to disclose content that is clearly unrealistic, animated, includes special effects, or has used Generative AI for production assistance.”

Some examples of videos that require disclosure include:

  • Using the likeness of a realistic person: Digitally altering content to replace the face of one individual with another’s or synthetically generating a person’s voice to narrate a video.
  • Altering footage of real events or places: Such as making it appear as if a real building caught fire, or altering a real cityscape to make it appear different than in reality.
  • Generating realistic scenes: Showing a realistic depiction of fictional major events, like a tornado moving toward a real town.
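The criteria above can be restated as a small decision function. The parameter names and the exact branching are our own simplification for illustration; YouTube's actual policy logic is, of course, not public code.

```python
def disclosure_required(realistic: bool,
                        altered_or_synthetic: bool,
                        production_assistance_only: bool = False) -> bool:
    """True when a label is needed under the rules described above."""
    if production_assistance_only:   # e.g. AI used only for scripts or ideas
        return False
    if not realistic:                # clearly unrealistic or animated content
        return False
    return altered_or_synthetic      # realistic + altered/synthetic -> label

# A synthetic voice narrating realistic footage needs a label...
print(disclosure_required(realistic=True, altered_or_synthetic=True))   # True
# ...while an obviously animated clip does not.
print(disclosure_required(realistic=False, altered_or_synthetic=True))  # False
```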

YouTube intends for this to ‘strengthen transparency with viewers and build trust between creators and their audience.’

When will the labels be rolled out?

The labels will be rolled out across all YouTube surfaces and formats in the weeks ahead, beginning with the YouTube app and then on desktop and TV.

While creators will be the ones responsible for correctly marking their videos, the online video-sharing platform says in the future “we’ll look at enforcement measures for creators who consistently choose not to disclose this information.

“In some cases, YouTube may add a label even when a creator hasn’t disclosed it, especially if the altered or synthetic content has the potential to confuse or mislead people.”

The post YouTube rolls out labeling for ‘realistic’ AI content appeared first on ReadWrite.

ChatGPT-5: release date, price, and what we know so far
https://readwrite.com/chatgpt-5-release-date-price-and-what-we-know-so-far/ (Tue, 13 Feb 2024 15:39:38 +0000)

In a recent conversation between the CEOs of Microsoft and OpenAI, it was revealed by Sam Altman that ChatGPT-5 is expected to receive significant updates to its speech, images, and eventually video capabilities.

On his “Unconfuse Me” podcast, Bill Gates, along with Altman, explored the future of artificial intelligence, including its improved reasoning ability and general reliability. “Multimodality will be important,” Altman said, hinting at a future where artificial intelligence (AI) can perform increasingly complex tasks and potentially reshape various sectors, including programming, healthcare, and education.

Anticipation is building for the next iteration of ChatGPT, known as GPT-5. This advanced large language model is seen as a crucial milestone on the path to achieving artificial general intelligence (AGI), enabling machines to mimic human thought processes.

Here’s what to expect with the next version of GPT.

What is ChatGPT-5?

OpenAI says ChatGPT-5 “will be a state-of-the-art language model that makes it feel like you are communicating with a person rather than a machine.”

GPT-5 marks the next generation of the company’s Generative Pre-trained Transformer language model. OpenAI claims it represents a major advancement in natural language processing capabilities. With its more human-like ability to comprehend and produce text, GPT-5 could transform how we communicate with machines and automate numerous language-related jobs.

Will there be a ChatGPT-5 and what can it do?

As Altman has suggested, ChatGPT-5 is already in development as an updated version of its predecessor, GPT-4. The OpenAI CEO stated, “Right now, GPT-4 can reason only in extremely limited ways, and its reliability is also limited,” hence the aim is to improve its current functionality.

GPT, which stands for “Generative Pre-trained Transformer,” is a deep learning-based language model designed to produce text that resembles human writing. It boasts more natural language processing skills and finds widespread use across numerous applications.

On top of being dependable, Altman stipulated that “customizability and personalization will also be very important.

“People want very different things out of GPT-4; different styles, different sets of assumptions – we’ll make all that possible,” he added.

Altman highlighted that GPT-5’s ability to utilize personal data, including understanding emails, calendar details, appointment scheduling preferences, and integrating with external data sources, will be among the key advancements.

Multi-modal AI is designed to learn from and use a variety of content types such as images, audio, video, and numerical data. OpenAI has stated that GPT-4 is a multi-modal model capable of processing both text and image inputs, although it can only generate outputs in text form; GPT-5 is expected to be trained on even more of this data.

“We launched images and audio, and it had a much stronger response than we expected. We’ll be able to push that much further, but maybe the most important areas of progress will be around reasoning ability,” Altman told Gates on his podcast.

OpenAI has already indicated that it is working on a “supersmart” assistant that could run a computer for its user. It is expected to rival Microsoft’s and Google’s own AI workplace assistants, though these programs are said to be in their infancy.

When will ChatGPT-5 be released?

Altman has not yet revealed a specific date for its release. He told the Financial Times in November that teams were working on the large language model, but did not say when it would be due.

Speaking at the World Governments Summit (WGS) in Dubai in February, Altman then reiterated that ChatGPT-5 is “going to be smarter.”

“It’s not like that this model is going to get a little bit better, it’s because we’re going to make them all smarter, it’s going to be better across the board,” he continued. He also spoke to Bloomberg, saying that he expected the company to “take its time” and make sure it can launch a product that they can feel “good about and responsible about.”

On March 19, Altman gave another update on the status of GPT-5, telling Lex Fridman on a podcast: “We will release an amazing model this year. I don’t know what we’ll call it.”

The OpenAI CEO also hinted at a massive boost in capability between GPT-4 and GPT-5 saying the difference between the two models would be as large as the upgrade between GPT-3 and GPT-4.

https://twitter.com/SmokeAwayyy/status/1769788180306248066/video/1

On at least two occasions last fall, Altman affirmed that OpenAI was actively developing GPT-5.

The initial confirmation came during a speech at the alumni reunion of Y Combinator, his former venture capital firm, last September, as corroborated by two attendees. At that event, Altman stated that GPT-5 and its successor GPT-6 “were in the bag,” implying their development was assured and that they would surpass the capabilities of previous models.

Will ChatGPT-5 be free?

While there is a free version of ChatGPT, it is unclear whether ChatGPT-5 will require a subscription, as its predecessor does. The ChatGPT Plus subscription plan is $20 a month, providing subscribers with exclusive benefits including priority access during high-traffic periods, enhanced response times, the ability to use plugins, and exclusive access to GPT-4. Users also have access to its in-house AI image model DALL·E.

It’s also important to note that current language models are already expensive to train and maintain. This means that when GPT-5 is eventually released, access to it will likely require a subscription to ChatGPT Plus or Copilot Pro.

Ultimately, the launch of GPT-5 could lead to GPT-4 becoming more affordable and accessible. In the past, the high cost of GPT-4 has deterred a number of users. However, once it becomes cheaper and widely available, ChatGPT’s capability to handle complex tasks such as coding, translation, and research could significantly improve.

OpenAI has been approached for further comment.

Featured image: DALL·E / Canva

The post ChatGPT-5: release date, price, and what we know so far appeared first on ReadWrite.

Nvidia debuts Earth-2 platform for enhanced climate forecasting https://readwrite.com/nvidia-debuts-earth-2-platform-for-enhanced-climate-forecasting/ Tue, 19 Mar 2024 00:16:36 +0000 https://readwrite.com/?p=261949 The Earth, centered within a complex digital interface of grids and holographic elements, symbolizes the fusion of technology and environmental science for climate analysis and forecasting

Nvidia today introduced its Earth-2 cloud platform at the GTC 2024 event in San Jose, California, aimed at advancing global climate change predictions through AI-powered simulations. According to VentureBeat, the platform, which was first mentioned in 2021, has now reached full operational status.

During a keynote address, Nvidia CEO Jensen Huang highlighted Earth-2’s role in addressing the frequent occurrence of climate disasters such as droughts, hurricanes, and floods. The Earth-2 platform, according to Huang, offers APIs that enable large-scale simulation and visualization of weather and climate data, providing a tool to better prepare for and mitigate the effects of extreme weather events.

The technology behind Nvidia’s Earth-2

Earth-2 relies on a combination of Nvidia’s CUDA-X microservices software and AI models, including the CorrDiff generative AI model. This integration allows for simulations that are both faster and more energy-efficient than traditional numerical models. Rev Lebaredian, Nvidia’s vice president of simulation, underscored the economic impact of extreme weather, which results in $140 billion in losses annually, and the need for detailed, kilometer-scale simulations to effectively address this challenge.

One of the first organizations to utilize Earth-2 is Taiwan’s Central Weather Administration, which aims to use the platform’s models to improve typhoon forecasting and evacuation planning. Additionally, Earth-2’s capabilities are being extended through integration with Nvidia Omniverse, enabling users to develop 3D workflows that incorporate actual weather data into their digital twin environments.

The Weather Company, an entity focused on weather data forecasting and insights, announced plans to integrate Earth-2 APIs to enhance their weather modeling products and services. This collaboration is intended to provide more accurate weather intelligence for enterprise clients and improve the overall understanding and simulation of weather impacts.

At the GTC 2024 conference, Nvidia also showcased other innovations such as the NIM software platform for streamlined AI deployment, and advancements in humanoid robotics with Project GR00T.

The post Nvidia debuts Earth-2 platform for enhanced climate forecasting appeared first on ReadWrite.

Nvidia launches NIM to simplify AI model deployment https://readwrite.com/nvidia-launches-nim-to-simplify-ai-model-deployment/ Mon, 18 Mar 2024 23:57:23 +0000 https://readwrite.com/?p=261944 A professional setting at Nvidia's GTC conference showcasing the Nvidia NIM software on a large monitor in a conference room. Technology enthusiasts and developers are engaged in discussion, highlighting the practical applications of AI deployment

At its GTC conference today, Nvidia unveiled NIM, a revolutionary software platform designed to seamlessly integrate both custom and pre-trained AI models into production environments.

Announced alongside a number of other reveals at Nvidia’s GTC conference today, NIM harnesses Nvidia’s expertise in AI model inferencing and optimization, offering a streamlined approach for developers. By merging AI models with an optimized inferencing engine and encapsulating them into containers accessible as microservices, NIM drastically reduces deployment time. According to TechCrunch, what would traditionally take months can now be accomplished in a fraction of the time, bypassing the need for extensive in-house AI expertise.

This innovative platform supports models from notable entities such as Nvidia, AI21, and Getty Images, alongside open models from tech giants like Google and Meta. Nvidia’s collaboration with Amazon, Google, and Microsoft aims to integrate NIM microservices into major cloud services, enhancing accessibility for developers across the board.

NIM’s backbone: Nvidia’s inferencing engines

At the heart of NIM lies the Triton Inference Server, alongside TensorRT and TensorRT-LLM, underscoring Nvidia’s commitment to providing a robust foundation for AI applications. The platform also features specialized microservices, such as Riva for speech and translation adjustments, cuOpt for routing optimizations, and the Earth-2 model for simulations in weather and climate.

Manuvir Das, head of enterprise computing at Nvidia, emphasized the efficiency and enterprise-grade quality that NIM brings to the table, allowing developers to focus on building enterprise applications without the overhead of model management.

NIM stands as a testament to Nvidia’s vision of transforming enterprises into AI-driven entities, equipped with a suite of containerized AI microservices. With the backing of industry giants and an ecosystem of partners, Nvidia’s NIM is poised to revolutionize the way AI models are deployed and utilized across various sectors.

Jensen Huang, Nvidia’s CEO, highlighted the transformative potential of NIM, envisioning a future where every enterprise leverages AI to enhance their operations and innovation capacity.

The post Nvidia launches NIM to simplify AI model deployment appeared first on ReadWrite.

Nvidia ventures into humanoid robotics with Project GR00T https://readwrite.com/nvidia-ventures-into-humanoid-robotics-with-project-gr00t/ Mon, 18 Mar 2024 23:40:07 +0000 https://readwrite.com/?p=261940 A wide-angle view of a technology conference featuring a humanoid robot engaging with attendees amid displays showcasing Nvidia's AI and robotics innovations, including the Jetson Thor computer and Isaac programs

Nvidia’s ambition to lead in the realm of AI extends beyond its renowned chip technology, as CEO Jensen Huang expresses enthusiasm for developing foundation models for humanoid robots, according to a recent TechCrunch report. The technology giant is leveraging its influence in AI hardware to pioneer innovations in robotics, marked by the announcement of Project GR00T at its annual GTC developer conference. This initiative underscores Nvidia’s strategic move into the competitive sphere of humanoid robotics, hinting at its potential impact on future AI applications.

Project GR00T emerges as Nvidia’s response to the growing interest in humanoid robots, proposing a general-purpose foundation model aimed at supporting a wide array of companies in this sector, such as Agility Robotics, Boston Dynamics, and XPENG Robotics. This move signifies Nvidia’s commitment to facilitating advancements in robotics, offering a platform that promises to accelerate development across various applications, from industrial automation to daily human assistance.

According to TechCrunch, Jonathan Hurst of Agility Robotics and Geordie Rose of Sanctuary AI lauded the initiative, highlighting the transformative potential of human-centric robots like Digit in reshaping labor and addressing major societal challenges. Nvidia’s partnership approach, focusing on collaboration and shared innovation, sets a promising stage for the evolution of embodied AI.

Accompanying Project GR00T, Nvidia introduced Jetson Thor, a computing marvel designed for humanoid robotics applications. This new hardware, featuring Nvidia’s Blackwell architecture, emphasizes the company’s efforts to push the boundaries of AI and robotics performance, facilitating the development of more capable and versatile robots.

Nvidia also unveiled two additional programs: Isaac Manipulator and Isaac Perceptor, targeting advancements in robotic dexterity and vision processing. These initiatives reflect Nvidia’s comprehensive strategy to influence a broader range of robotic functionalities, from manufacturing automation to autonomous navigation.

The post Nvidia ventures into humanoid robotics with Project GR00T appeared first on ReadWrite.

Nearly 40% of game devs create assets with AI, as turkeys seemingly vote for Christmas https://readwrite.com/nearly-40-of-game-devs-create-assets-with-ai-as-turkeys-seemingly-vote-for-christmas/ Mon, 18 Mar 2024 18:06:55 +0000 https://readwrite.com/?p=261880 An AI-generated image of a packed room full of turkeys on computers.

If you are one of those people who hates the idea of artificial intelligence (AI) being used to create the games you love because it puts talented people out of work, replacing them with ChatGPT’s overly enthusiastic writing or Midjourney’s seven-fingered monstrosities, you are going to hate this.

The new 2024 Unity Gaming Report surveyed 7,062 workers in the games industry and “found that around 62% (4,378) of developers are leveraging AI tools during production,” according to an analysis by GameDeveloper.com.

Of those, a further 63% (2,758) confirmed they used generative AI to help create assets for their projects.

This means around 39% of everyone surveyed is using AI to build game assets. It is striking that, as the games industry undergoes wave after wave of layoffs amid panic that AI will ultimately cost many jobs, the very tools being blamed for redundancies are embraced by so many.
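As a sanity check, the report’s figures are mutually consistent. A quick back-of-the-envelope calculation (respondent counts rounded to whole people, as in the report):

```python
# Verify the survey arithmetic: 62% of all respondents use AI tools,
# 63% of *those* use generative AI for assets, i.e. ~39% of everyone.
total_surveyed = 7_062
using_ai_tools = round(total_surveyed * 0.62)   # reported as 4,378
using_gen_ai = round(using_ai_tools * 0.63)     # reported as 2,758
overall_share = using_gen_ai / total_surveyed

print(using_ai_tools, using_gen_ai)   # 4378 2758
print(f"{overall_share:.0%}")         # 39%
```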

AI and generative AI should be used as tools and aids. What would be fascinating to learn is whether the decisions to bring our future robot overlords on board at this early stage are being made in the boardroom and forced on the workers, or whether the developers themselves – the artists and programmers – are embracing the tech to become more productive.

‘We’re empowering designers’

Kjell Bronder, VP of product management at Niantic (famous for Pokémon GO), said:

“To increase our productivity, like many others, we’ve been really looking at AI and seeing how we can use this new generation of tools to empower our designers so they can create new assets to explore new ideas.”

“Also, how we can prototype new ways of characters, bring them to life, and have them animate in new ways.”

These empowered designers were obviously not part of the 25% of staff layoffs the company made in the middle of 2023.

Unity itself said, “While 2023 was a year marked with persisting economic headwinds, game developers are strategically navigating the challenges and successfully adapting to changes in technology, the economy, and player habits by doing more with less.” 

Presumably, the “less” includes the 1,800 people Unity itself let go at the start of the year.

One thing is for sure: the better AI gets, the less work there will be for humans in the games industry. That is something we need to get our heads around and accept, but we, the people who buy the games, should not be expected to accept lower standards for profit.

Featured Image: AI-generated by Ideogram

The post Nearly 40% of game devs create assets with AI, as turkeys seemingly vote for Christmas appeared first on ReadWrite.

India revises approval stance on AI model launches https://readwrite.com/india-revises-approval-stance-on-ai-model-launches/ Mon, 18 Mar 2024 15:41:47 +0000 https://readwrite.com/?p=261662 conceptual image showing the flag of India and holographic representation of AI, 3d render

Amidst a torrent of backlash from industry experts, India’s regulatory authorities have backtracked on their planned framework for the deployment of cutting-edge artificial intelligence (AI) technologies.

An updated communication from the Ministry of Electronics and IT in recent days detailed that government approval would no longer be a prerequisite ahead of launching AI models to users in the domestic market.

It came after the government department expressed fears of interference in the South Asian state’s democratic process, issuing the AI advisory to tech firms so that their services and products “do not permit any bias or discrimination or threaten the integrity of the electoral process.”

India’s Deputy IT Minister Rajeev Chandrasekhar described the advisory as “signalling that this is the future of regulation,” adding: “We are doing it as an advisory today asking you to comply with it.”

This instruction no longer applies; instead, companies will be asked to take steps to identify potential sources of bias by labelling under-tested or untrustworthy AI chatbots so that users are fully aware.

What is being asked of AI model creators under the new guidance?

The move represents another development, and something of a U-turn, in India’s approach to its safeguards on emerging technology. Previously, the government was reluctant to regulate or interfere significantly with AI, but the short-lived advisory marked a change of direction.

An official document seen by TechCrunch sets out the new approach, in what is a further sign of inconsistency from the Delhi administration. The letter outlined the need to adhere to existing Indian law and the imperative for generated content to be free of bias, discrimination and any threats to democracy.

For any areas of potentially unreliable AI output, creators have been asked to utilize “consent popups” or similar mechanisms to directly inform users. The Ministry also stressed the importance of being able to identify deepfakes and misinformation, urging the use of specific identifiers or metadata to assist with this important purpose.

Image credit: Ideogram

The post India revises approval stance on AI model launches appeared first on ReadWrite.

Apple is in talks to license Gemini AI for iPhones https://readwrite.com/apple-is-in-talks-to-license-gemini-ai-for-iphones/ Mon, 18 Mar 2024 12:25:03 +0000 https://readwrite.com/?p=261508 Rumours of Gemini AI for iPhones. Image created by AI, showing an iPhone with advanced AI features

The iPhone could soon have Google’s Gemini artificial intelligence (AI) to power new features, following chats between Apple and the search giant.

According to a Bloomberg report, the two companies are reportedly in ‘active negotiations to let Apple license Gemini, coming to the iPhone software this year.’

Gemini is Google’s suite of generative AI tools, including chatbots and coding assistants. Over the years, Google is said to have paid Apple billions of dollars annually to make its search engine the default option in the Safari web browser on the iPhone and other devices.

A partnership between these two tech titans could give them a huge edge over the industry, but it seems more has to be done as the report says: “the two parties haven’t decided on the terms or branding of an AI agreement or finalized how it would be implemented.”

Apple has recently held discussions with OpenAI too, which is one of Gemini’s biggest competitors in the industry.

Neither Apple nor Google has confirmed or denied the potential collaboration.

Apple to focus on AI in the future

Coming in September 2024, Apple’s update – in the form of iOS 18 – is expected to be heavily focused on the use of AI.

Bloomberg’s Mark Gurman believes this could be a ‘relatively groundbreaking’ software update with ‘major new features and designs.’ Although the rumors have been circulating for the last few months, the Silicon Valley-based company hasn’t yet offered much in the way of sneak peeks.

This comes after news of Apple’s testing of its own large language model, Ajax. In 2023 we reported how the chatbot, built using Apple’s internal framework of the same name, utilizes AI to generate human-like responses.

It’s believed that Ajax GPT was created for internal use and not much else has been heard about the tool over the last few months.

Featured Image: Via Ideogram

The post Apple is in talks to license Gemini AI for iPhones appeared first on ReadWrite.

Elon Musk’s xAI open-sources base model of Grok https://readwrite.com/elon-musks-xai-open-sources-base-model-of-grok/ Mon, 18 Mar 2024 12:53:17 +0000 https://readwrite.com/?p=261634 Grok confirms its open source base model

xAI has confirmed the open release of its Grok artificial intelligence (AI) model, detailing the weights and architecture of the system. 

Elon Musk’s company described Grok-1 as a “314 billion parameter Mixture-of-Experts model trained from scratch by xAI.” The San Francisco-based startup also outlined how the model had been trained on a large volume of text data rather than fine-tuned for any specific task.

No training code is required; users can head directly to GitHub to get started with the large language model (LLM).

Companies like xAI make their LLMs either fully open-source or available through limited open-source releases in an effort to solicit input from the research community on potential enhancements.

Grok has also been available to premium subscribers on X, the prominent social media platform formerly known as Twitter before Musk’s takeover.

What is the legal dispute between Elon Musk and OpenAI?

Last week, the billionaire announced xAI would open-source Grok at a time when he is embroiled in a bitter legal battle with rival firm OpenAI, which he was previously associated with as a co-founder.

Musk, 52, walked away to take a different direction in AI development, and he has now sued the maker of ChatGPT and its CEO, Sam Altman, for allegedly reneging on the company’s original purpose of building AI for the benefit of humanity rather than for the pursuit of profit.

In response, OpenAI has rebuked the legal action, stating it is based on “convoluted – often incoherent – factual premises”, in a court filing as part of the counterclaim.

An official blog post that accompanied an email to company employees appeared to dismiss Musk’s early influence and impact on OpenAI.

The firm insisted it has stayed true to its mission “to ensure AGI benefits all of humanity, which means both building safe and beneficial AGI”, “every step of the way”.

It was penned collectively by Altman, Greg Brockman, John Schulman, Ilya Sutskever, and Wojciech Zaremba, and detailed how the firm had raised less than $45 million from Musk, despite his pledge to bring in up to $1 billion in funding.

Image source: x.ai

The post Elon Musk’s xAI open-sources base model of Grok appeared first on ReadWrite.

What is DarwinAI, Apple’s newly acquired start-up? https://readwrite.com/what-is-darwinai-apples-newly-acquired-start-up/ Mon, 18 Mar 2024 13:03:24 +0000 https://readwrite.com/?p=261561 Apple with artificial intelligence

Apple quietly acquired Canadian artificial intelligence startup DarwinAI earlier this year, adding dozens of the company’s staffers to its AI division.

Bloomberg News reported the under-the-radar purchase on Thursday (Mar. 14), with Apple yet to publicly announce the news. The company did tell the US news outlet that it “buys smaller technology companies from time to time.”

One of the big players at the Ontario-based company, Alexander Wong, has joined Apple as a director in its AI group as part of the deal. He helped build the startup and works as an AI researcher at the University of Waterloo.

DarwinAI is the latest in a long list of similar acquisitions as Apple reportedly purchased 32 AI startups last year.

What is DarwinAI?

DarwinAI is best known for developing AI technology that can visually inspect components during the manufacturing process. They’re said to have served customers in a range of industries.

Founded in 2017, the company said its initial focus was on disrupting ‘the electronics manufacturing industry by improving the efficiency of Printed Circuit Board Assembly production through their ground-breaking technology.’

In 2022, they raised more than $15 million across six funding rounds. They received investment from 12 investors, including the likes of Honeywell Ventures and Inovia Capital, amongst other venture capital firms.

In a press release following the announcement of a funding round in December 2022, one of the co-founders said the “transformation is just beginning,” as they planned to raise money to help continue building advanced features.

Why did Apple buy DarwinAI?

It’s no secret that Apple is currently trailing behind other industry titans who have already made the steps into the world of AI.

The iPhone maker is now playing catch-up, with Chief Executive Officer Tim Cook promising that Apple will “break new ground” in AI this year. And the iPhone could soon have Google’s Gemini artificial intelligence (AI) to power new features, following chats between Apple and the search giant.

The acquisition is likely in line with Apple’s focus on artificial intelligence, as it looks to keep up with others in the industry.

In September of 2023, it was reported by The Information that the multinational corporation is investing millions of dollars per day into artificial intelligence, with work ongoing across multiple AI models over several teams.

Extensive work seems to have been carried out in the background at Apple to bring AI to the forefront, with an announcement expected around June, ahead of iOS 18 in September.

Featured Image: Via Ideogram

The post What is DarwinAI, Apple’s newly acquired start-up? appeared first on ReadWrite.

Hackers can read your encrypted AI-assistant chats https://readwrite.com/hackers-can-read-your-encrypted-ai-assistant-chats/ Sat, 16 Mar 2024 11:07:17 +0000 https://readwrite.com/?p=261460 A suspenseful image of a skilled hacker hunched over a computer screen, fingers flying across the keyboard. The room is dimly lit, with various monitors, cables, and computer components scattered around. A neon glow emanates from the keyboard and screen, casting an eerie light on the hacker's determined face. The background shows a cityscape with a skyline of futuristic skyscrapers, reflecting the hacker's global reach. Model

Researchers at Ben-Gurion University have discovered a vulnerability in cloud-based AI assistants like ChatGPT. The vulnerability, according to researchers, means that hackers can intercept and decrypt conversations between people and these AI assistants.

The researchers found that chatbots such as ChatGPT stream their responses token by token, sending each small piece as soon as it is generated so that replies appear quickly. Because each token travels in its own encrypted packet, hackers can intercept the traffic and analyze the length, size, and sequence of those packets to infer the content of the responses.
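To illustrate the general idea (this is a simplified sketch, not the researchers' actual code), consider what a passive eavesdropper sees if each streamed token is encrypted with a length-preserving cipher plus a fixed amount of framing overhead per packet: the exact byte length of every token leaks, even though the encryption itself is never broken. The `HEADER_OVERHEAD` value here is an assumption for illustration.

```python
# Hypothetical sketch of the token-length side channel described above.
# Assumption: each token is sent in its own packet, encrypted with a
# length-preserving cipher, plus a fixed HEADER_OVERHEAD of framing bytes.

HEADER_OVERHEAD = 5  # assumed fixed framing/record overhead per packet


def packet_sizes(tokens: list[str]) -> list[int]:
    """What a passive observer on the network sees: only packet sizes."""
    return [len(tok.encode("utf-8")) + HEADER_OVERHEAD for tok in tokens]


def recover_token_lengths(sizes: list[int]) -> list[int]:
    """The attacker subtracts the known overhead to get plaintext lengths."""
    return [size - HEADER_OVERHEAD for size in sizes]


response_tokens = ["The", " answer", " is", " 42", "."]
observed = packet_sizes(response_tokens)
print(recover_token_lengths(observed))  # token lengths leak: [3, 7, 3, 3, 1]
```

The sequence of recovered lengths is the raw signal; the actual attack then feeds such sequences into a model trained to guess the likely words behind them.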

“Currently, anybody can read private chats sent from ChatGPT and other services,” Yisroel Mirsky, head of the Offensive AI Research Lab, told Ars Technica in an email.

“This includes malicious actors on the same Wi-Fi or LAN as a client (e.g., same coffee shop), or even a malicious actor on the Internet—anyone who can observe the traffic. The attack is passive and can happen without OpenAI or the client’s knowledge. OpenAI encrypts their traffic to prevent these kinds of eavesdropping attacks, but our research shows that the way OpenAI is using encryption is flawed, and thus the content of the messages are exposed.”

“Our investigation into the network traffic of several prominent AI assistant services uncovered this vulnerability across multiple platforms, including Microsoft Bing AI (Copilot) and OpenAI’s ChatGPT-4. We conducted a thorough evaluation of our inference attack on GPT-4 and validated the attack by successfully deciphering responses from four different services from OpenAI and Microsoft.”

According to these researchers, there are two main solutions: either stop sending tokens one by one or make tokens as large as possible by “padding” them to the length of the largest possible packet, which, reportedly, will make these tokens harder to analyze.
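A rough sketch of the padding idea (my illustration, not the researchers' implementation): if every token is padded to one fixed size before encryption, all packets look identical on the wire and the length signal disappears. The `MAX_TOKEN_BYTES` bound is an assumed value.

```python
# Hypothetical sketch of the padding mitigation: pad every token to a
# fixed size so all encrypted packets have the same length on the wire.

MAX_TOKEN_BYTES = 32  # assumed upper bound on a token's encoded length


def pad_token(token: str, size: int = MAX_TOKEN_BYTES) -> bytes:
    data = token.encode("utf-8")
    if len(data) > size:
        raise ValueError("token longer than padding size")
    # Length-prefix the real data, then fill the rest with zero bytes,
    # so the receiver can strip the padding after decryption.
    return len(data).to_bytes(1, "big") + data + b"\x00" * (size - len(data))


padded = [pad_token(t) for t in ["The", " answer", " is", " 42", "."]]
# Every padded token is the same length, so packet sizes reveal nothing.
print({len(p) for p in padded})  # {33}
```

The trade-off is bandwidth: every short token now costs as many bytes as the longest possible one, which is why the alternative fix is simply to stop streaming tokens one at a time.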

Featured image: Image generated by Ideogram

The post Hackers can read your encrypted AI-assistant chats appeared first on ReadWrite.

]]>
Pexels
AI speeding up oil extraction and boosting US crude output https://readwrite.com/ai-oil-drilling-crude-supply/ Fri, 15 Mar 2024 15:03:22 +0000 https://readwrite.com/?p=261122 a lone oil rig at night on a dark choppy sea with an American flag coming out of it, cinematic.

A new report has highlighted all the different ways artificial intelligence is revolutionizing the oil industry. For offshore drillers like… Continue reading AI speeding up oil extraction and boosting US crude output

The post AI speeding up oil extraction and boosting US crude output appeared first on ReadWrite.

]]>
a lone oil rig at night on a dark choppy sea with an American flag coming out of it, cinematic.

A new report has highlighted all the different ways artificial intelligence is revolutionizing the oil industry.

For drillers like Nabors Industries Ltd, artificial intelligence (AI) is transformative. AI from Houston-based software developer Corva LLC can autonomously control a drilling rig, increasing drilling speed by at least 30% while significantly reducing the volume of commands needed to operate the rig.  

“This is all automated — the driller doesn’t have to press anything,” Rafael Guedes, the company’s director of performance tools, told Bloomberg. “Now you can use your brain power for something else.”

According to the outlet, companies like Nabors Industries Ltd that are using AI to become more efficient save between 25% and 50% on costs.

Bloomberg reports that AI is also being tested in offshore oil fields, with companies like SLB reporting significant time savings. Jesus Lamas, president of SLB’s well construction unit, told Bloomberg that within the next three to five years, all their wells will be autonomously controlled by AI. “We need to do something different,” Lamas said. “We need to lower the cost of a barrel, we need to increase efficiency and we need to decrease the CO2 emissions per barrel.”

‘A lean operator’

Another company that spoke to Bloomberg is Hilcorp Energy Co, a private oil and gas producer based in the US. The company uses machine learning to predict equipment failures before they happen, which in turn prevents downtime. The company estimates that predicting failures this way keeps approximately half a billion cubic feet of gas production from going offline.

“We always want to be a lean operator,” Lisa Helper, a geologist, said at a company conference quoted by Bloomberg. “Utilizing AI and machine learning in the field, in the office, then eventually through subsurface analysis has enabled us to keep a very tight, optimal workforce.”

Featured Image: Photo by Jan-Rune Smenes Reite via Pexels

The post AI speeding up oil extraction and boosting US crude output appeared first on ReadWrite.

]]>
Pexels
Generative AI is making games in six hours, claims Cosmic Lounge https://readwrite.com/generative-ai-is-making-games-in-six-hours-claims-cosmic-lounge/ Fri, 15 Mar 2024 11:10:09 +0000 https://readwrite.com/?p=261035 A futuristic 3D render of a sophisticated AI system, surrounded by a virtual gaming environment. The AI is developing video games autonomously, with a stream of code and data flowing from its central core. A monitor displays a sneak peek of one of the games in development, featuring vibrant graphics and a captivating storyline., 3d render

According to a talk Cosmic Lounge’s cofounder Tomi Huttula gave at Think Games 2024, his studio has developed a generative… Continue reading Generative AI is making games in six hours, claims Cosmic Lounge

The post Generative AI is making games in six hours, claims Cosmic Lounge appeared first on ReadWrite.

]]>

According to a talk Cosmic Lounge’s cofounder Tomi Huttula gave at Think Games 2024, his studio has developed a generative artificial intelligence (AI) tool that can prototype games in “five to six hours.”

“Yes, AI is really changing how games are developed and it is a big part of the future of game development,” he said during his talk. “My recommendation to everybody here is that you should be getting your teams and technology ready for AI.”

During the talk, Huttula demonstrated Cosmic Lounge’s in-house tool, Puzzle Engine, which uses dropdown menus and prompts to create a variety of game elements from puzzle engines and game logic to art and levels. He showed off ‘Angry Dev’, a puzzle game prototype created using Puzzle Engine in five to six hours.

“Creative people have a lot of game ideas. The challenge, of course, always has been: how do you know if your idea is good?” said Huttula. “Typically within companies, a designer has an idea, but they need an artist and an engineer to try the idea. So we want it to be super easy for designers – they can come up with ideas to create a prototype, just with Puzzle Engine, without an artist or an engineer.”

He described how Cosmic Lounge’s tech can generate vast quantities of levels, and then play through those levels for testing. It can then provide feedback on important points such as difficulty, possible churn points, and even potential areas for monetization. The studio’s human designers, of which there are only two, can then edit and tweak the levels based on the AI’s feedback.

Generative AI in an industry scarred by layoffs

“We don’t think that AI is doing anybody’s job,” said Huttula. “This is not replacing somebody…we are thinking about AI as helping our team members to do their job better, be more productive, and get better results.”

However, this might not be a stance shared by many game developers. The industry has been dominated by layoffs for the better part of a year despite soaring profits. It is hard to hear ‘AI is generating and testing levels’ and ‘AI isn’t taking any jobs’ in the same talk without some unease. The games industry is grappling with how generative AI will affect it, and with large company bosses such as Square Enix’s President Takashi Kiryu and EA CEO Andrew Wilson boarding the AI hype train, the mood is tense.

Valve has introduced new processes for developers to disclose how AI is used in their games. Several users have taken to X to express their unease, with one saying they won’t touch any game made with generative AI. This is a complex, multifaceted issue that the games industry will be confronting for months and years to come.

Featured image credit: Ideogram

The post Generative AI is making games in six hours, claims Cosmic Lounge appeared first on ReadWrite.

]]>
Pexels