On June 10 at WWDC 2024, Apple announced its commitment to artificial intelligence (AI). The firm led by Tim Cook presented Apple Intelligence, a system that will land on iOS 18, iPadOS 18 and macOS 15 Sequoia to let users rewrite texts and generate images in different styles, although photorealism will be left out of the equation for safety reasons.
It’s no secret that many people – and a section of the tech industry – have been eagerly awaiting this move from Apple. After all, some of the biggest companies on the planet have already shown their cards in the AI game. The announcement, as expected, generated mixed opinions, but the discontent of some has not gone unnoticed.
Apple’s creative users want answers
Engadget points out that some creative users of the Cupertino firm are upset about the “lack of transparency” surrounding how Apple Intelligence works. The crux of the matter is the scant official information available about the data Apple used to train its artificial intelligence models, a situation that, it should be noted, could be described as the common denominator of the industry.
As we know, AI models need to be “fed” with a huge amount of data. This data, in general terms, is the source of information from which they learn about any topic and then provide answers. Apple says it has trained its AI models with Applebot, which is basically a web crawler: a system that “scrapes” the web to extract information.
The firm says it takes data from the “open web,” though it has not offered any details on this. Several questions arise here: for example, whether copyrighted content is included, or whether any filters are applied. In any case, Apple has said that its bot strictly complies with the directives of robots.txt files, excluding certain pages from the scraping activity that fuels its generative AI.
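To illustrate the robots.txt mechanism Apple says Applebot honors, here is a minimal sketch using Python’s standard-library parser. The robots.txt rules and URLs below are invented for illustration; only the user-agent name “Applebot” comes from Apple’s documentation.

```python
from urllib import robotparser

# Hypothetical robots.txt a site might publish to opt certain pages
# out of crawling. The paths here are made up for this example.
ROBOTS_TXT = """\
User-agent: Applebot
Disallow: /private/
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler checks each URL against the rules before fetching it.
print(parser.can_fetch("Applebot", "https://example.com/articles/ai"))   # allowed
print(parser.can_fetch("Applebot", "https://example.com/private/data"))  # blocked
```

In practice, a crawler fetches the site’s real `/robots.txt` and applies these checks at scale; opting out after the fact does not remove data already collected, which is part of what worries creatives.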
Apple has been quick to explain in detail its efforts to protect the privacy and security of users of both its own artificial intelligence system and OpenAI’s, via ChatGPT. However, creatives such as photographers, designers, and writers suspect that some of their work may have been included in Apple Intelligence’s training without their consent.
As we say, Apple’s case is not unique. The AI industry offers many other examples of companies that are reluctant to disclose how their models are trained. If we focus on OpenAI, we do not know for sure what data GPT-4 and Sora have been trained with. In the case of Stable Diffusion, we know it has been trained with data sets collected by the non-profit organization LAION.
Images | Apple
At WorldOfSoftware | It was only a matter of time before someone sued a creative AI for violating intellectual property
At WorldOfSoftware | AI models are being trained with photos of children. And it doesn’t matter if parents try to avoid it