I Traded My Sketchpad for a Prompt Box—And Art Will Never Be the Same | HackerNoon

News Room · Published 18 June 2025 · Last updated 18 June 2025, 1:16 AM

TL;DR: Generative AI models like DALL·E are reshaping digital art – enabling instant image generation from text. This article explores how it works and what it means for the future of creativity.

Just a few years ago, creating digital art required a mastery of complex tools, expensive equipment, and weeks – if not months – of practice. Today, with a simple sentence and a few seconds, artificial intelligence can conjure a painting that once took an artist days to create. But what have we truly gained, and what might we have left behind? And perhaps most critically – does it render the artist obsolete?

From Brushstrokes to Pixels: The Traditional Digital Artist’s Journey

Becoming a digital artist demands quite a lot – far more than meets the eye.

As an artist, the transition to digital creation is neither seamless nor cheap. It begins with choosing the right hardware, a decision complicated by countless variables: operating systems, device comfort, screen size, pen sensitivity, price, and more.

Next comes the platform. Whether you’re on a tablet or a computer, the number of applications available can feel overwhelming. From Photoshop to Procreate, Clip Studio to Corel Painter, the choices are vast – and each comes with a steep learning curve, given the sheer range of tools and capabilities packed into every one.

Mastering the software is only half the battle. One must also learn to translate traditional skills into the digital realm. Colour theory, composition, and brushwork remain foundational, but the tactile experience of graphite or watercolour doesn’t intuitively carry over to an Apple Pencil or stylus. Even seasoned artists face a period of unlearning and retraining.

And for newcomers, the journey is even steeper. The digital world doesn’t bypass the need for artistic fundamentals – it simply reshapes the way they’re learned. Understanding form, light, depth, and style remains just as essential, making the learning process longer and more layered for those starting from scratch.

Whether you’re an experienced painter or a complete beginner, the transition into digital art is anything but quick. For professionals, it means adapting years of muscle memory to new tools and workflows. For newcomers, it involves building foundational art skills from the ground up. In both cases, the path is long – filled with trial and error, endless hours of practice, and a deep well of patience and dedication. The journey can take months – just to render something as deceptively simple as an apple, a cat, or a chair.

Then Came AI: A Paradigm Shift in Creative Process

Fortunately – or perhaps inevitably – this landscape has changed.

With the rise of artificial intelligence, the time-consuming initiation into digital art is no longer a necessity. Today, AI models like OpenAI’s DALL·E can generate art with just a prompt. Type a sentence, and a digital masterpiece materialises in seconds.

The art world, like nearly every other domain, has been shaken by the capabilities of AI. From GPT models answering questions and fixing code, to DALL·E generating illustrations, the creative process has been transformed.

But how does it actually work? And is it as magical as it seems?

Learning Like a Human: The Foundation of AI Art

To understand how AI can generate images, we must first understand how it learns. The concept is surprisingly human.

Imagine a toddler – let’s call him Oliver – learning to identify animals. His mother points to a black cat and says, “That’s a cat.” Then she points to a ginger cat and repeats the word. Oliver, applying what he’s learned, sees a white cat and proclaims, “Cat!”

Despite the new colour, Oliver recognises shared features: four legs, whiskers, pointed ears, a tail, and a meow. This is clustering: the brain’s ability to identify patterns from limited data.

But mistakes happen. One day, Oliver sees a Shih-Tzu dog and calls it a cat. It’s his best guess, based on the information he has. His mother corrects him: “No, that’s a dog.” From that point forward, Oliver refines how he distinguishes cats from dogs in his mind.

This is the essence of how artificial neural networks learn.
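
To make the analogy concrete, here is a toy sketch of Oliver-style learning in Python – a hedged illustration, not how real neural networks are built: the animal features and examples are invented, and the “model” simply remembers corrected examples and guesses from the closest match.

```python
# Toy sketch of learning from labelled examples and corrections.
# Each animal is described by invented features: (legs, whiskers, barks)
examples = [
    ((4, 1, 0), "cat"),   # black cat
    ((4, 1, 0), "cat"),   # ginger cat
    ((4, 0, 1), "dog"),   # the Shih-Tzu that fooled Oliver
]

def predict(features, known):
    """Guess the label of the closest remembered example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(known, key=lambda ex: distance(ex[0], features))[1]

known = []
for features, true_label in examples:
    if known:
        guess = predict(features, known)
        if guess != true_label:
            print(f"Guessed {guess}, corrected to {true_label}")
    known.append((features, true_label))  # remember the corrected example

print(predict((4, 1, 0), known))  # a white cat is still a "cat"
```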

Neural Networks: Digital Brains Built on Data

A neural network is a type of computer model inspired by how the human brain works. It’s built to recognise patterns and learn from experience by analysing large amounts of data, just like baby Oliver.

Neurons in the human brain. Illustration created by the author using DALL-E 3.

These networks are made up of layers of tiny processing units called “nodes” – the artificial counterparts of neurons – that are connected to each other, much as neurons connect through synapses in the brain. Each connection has a value that controls how strongly one node influences another. These values are called “weights”.

During training, the network adjusts these weights over time based on how well or poorly it performs – just like Oliver gradually learns to tell the difference between a cat and a dog by correcting his mistakes. This process helps the network get better at making accurate predictions.

The training process requires a dataset – a structured collection of information used to teach the model. For a language model, this typically means millions of sentences, each constructed from words, which in turn are built from letters of the alphabet. For a visual model, it means millions of images – each composed of objects, which themselves are formed from individual pixels.

The larger and more diverse the dataset, the more accurate the model becomes.

Inside the Neural Network: How Data Flows and Decisions Form

Let’s imagine a neural network as it is often visualised in modern diagrams: a vast graph made up of interconnected nodes. These nodes are organised into distinct layers – stretching from top to bottom like the tiers of a layered circuit. Each node, or artificial neuron, plays a role in processing information.

This overall structure is known as the topology of the network. Topology defines how many layers the network contains, how many nodes exist in each layer, and how data flows between them. It’s the architectural blueprint of the model.

Training begins by feeding input data – such as images or sentences – into the first layer of the network. This data then moves forward through the network, layer by layer. At each stage, nodes apply mathematical operations to the data, such as matrix multiplications, activation functions like ReLU (Rectified Linear Unit), or other transformations.
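
As a rough illustration, here is a minimal forward pass in Python with NumPy, using exactly the operations named above – matrix multiplication and ReLU. The layer sizes are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal(4)          # input layer: 4 features
W1 = rng.standard_normal((8, 4))    # weights: input -> hidden (8 nodes)
W2 = rng.standard_normal((2, 8))    # weights: hidden -> output (2 nodes)

def relu(z):
    return np.maximum(0.0, z)       # ReLU: keep positives, zero out negatives

hidden = relu(W1 @ x)               # first layer: matrix multiply + activation
output = W2 @ hidden                # second layer produces the prediction
print(output.shape)                 # (2,)
```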

As this process unfolds, a computation graph is constructed in the background. This graph meticulously records each operation and the flow of data: which node performed what transformation. Think of it as a detailed recipe or map of the model’s thought process.

This computation graph is essential – not just for making predictions, but also for learning from mistakes. When the model produces an incorrect prediction, the graph allows the system to trace back through each step and adjust the weights. This backward path is what sets the stage for backpropagation – the core mechanism through which neural networks improve over time.

Backpropagation: Learning by Error

Let’s go back to Oliver.

When he mislabels a dog as a cat, his mother corrects him, and Oliver readjusts his understanding.

After a network makes a prediction, the result is compared to the true value using a loss function. The loss function measures how far the prediction was from the actual result.
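
A common concrete choice is mean squared error – a minimal sketch, where smaller values mean a better prediction:

```python
import numpy as np

def mse_loss(prediction, target):
    # Average of squared differences between prediction and truth
    return np.mean((prediction - target) ** 2)

print(mse_loss(np.array([0.9, 0.1]), np.array([1.0, 0.0])))  # 0.01
```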

In the process of backpropagation, the layers of the graph are traversed backwards in order to calculate how much each weight in the network contributed to the error. Using the chain rule from calculus, gradients are computed: each gradient is the derivative of the loss with respect to a particular weight. These gradients indicate how to fix the mistake, giving both the direction and the magnitude of the change required to reduce the error. The weights are then updated accordingly, making the network more precise.

This feedback loop – forward pass, error calculation, backward pass – is repeated over and over. The result? A trained model capable of identifying patterns and making increasingly accurate predictions.
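
Here is that loop in miniature – a hedged sketch with a single weight and invented data, small enough that the chain-rule gradient can be written out by hand:

```python
import numpy as np

xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 3.0 * xs                      # the "truth" the model should learn: y = 3x

w = 0.0                            # start with an uninformed weight
lr = 0.01                          # learning rate: size of each correction

for epoch in range(200):
    pred = w * xs                          # forward pass
    loss = np.mean((pred - ys) ** 2)       # error calculation
    grad = np.mean(2 * (pred - ys) * xs)   # backward pass: chain rule, dloss/dw
    w -= lr * grad                         # step against the gradient

print(round(w, 3))  # ~3.0 after training
```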

Teaching Oliver to Draw: The Power of Generative Models

Now that we’ve explored how predictions are made using predictive models, we can begin to understand the more intricate workings of generative models. As the names suggest, predictive models are designed to analyse existing data and make informed estimates about likely outcomes – much like young Oliver who, after studying animals, can confidently predict the type of a new one he encounters.

But imagine asking Oliver not to identify a cat, but to draw one. This task requires more than recognition – it calls for creation.

We assume he doesn’t have a cat in front of him, nor can he recall every precise detail of the cats he’s seen. Yet, he is now asked to conjure an entirely new image of a cat based on memory, imagination, and learned patterns. This act of constructing something new – rather than selecting from what already exists – is the essence of generative models.

Generative models, such as DALL·E, go beyond pattern recognition. They are trained to produce original content that resembles what they’ve encountered during training. Rather than merely answering, “What is this?”, they respond to the question, “What might this look like if it existed?” These models don’t just understand data – they create with it, generating entirely new images, text, audio, or video that align with the structures and styles they’ve learned.

When Language Takes Shape

Generative models like DALL·E are trained on vast datasets of image-text pairs, learning to associate visual elements with language. During training, the model sees an image alongside its caption and gradually learns which words correspond to which shapes, textures, colours, and concepts.

It builds an internal map of meaning, understanding that “a red apple” implies roundness, a specific hue, a stem, and so on. Then, when given a new text prompt, the model converts the words into a structured representation and uses that as a guide to generate an image – starting from random noise and refining it step by step until a coherent visual emerges that matches the text.

This process allows the model to create entirely new images it has never seen before, while still staying true to the patterns it learned during training.
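
In code terms, the refinement loop looks roughly like the sketch below. Every helper here (text_encoder, denoiser, step_size) is a trivial placeholder standing in for enormous learned components; only the shape of the loop reflects how text-to-image generation proceeds – this is a conceptual outline, not DALL·E’s actual implementation.

```python
import numpy as np

def text_encoder(prompt):
    return np.ones(16)        # placeholder: words -> learned embedding

def denoiser(image, t, condition):
    return image * 0.1        # placeholder: predict the noise at step t, guided by the text

def step_size(t):
    return 0.5                # placeholder schedule

def generate(prompt, steps=50):
    condition = text_encoder(prompt)           # words -> structured representation
    image = np.random.randn(64, 64, 3)         # start from pure random noise
    for t in reversed(range(steps)):
        noise = denoiser(image, t, condition)  # what still looks like noise?
        image = image - step_size(t) * noise   # refine, step by step
    return image

apple = generate("a digital painting of an apple")
print(apple.shape)  # (64, 64, 3)
```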

A Thousand Apples a Second: What AI Sees That Artists Can’t

So how does this apply to art?

Let’s say you want to create a digital painting of an apple using DALL·E. You simply type your request – and within seconds, you receive an image.
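
The same request can also be made programmatically – a minimal sketch assuming the OpenAI Python SDK and a valid API key; exact model names and parameters may vary by account and SDK version:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",
    prompt="Generate a digital painting of an apple",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # link to the generated image
```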

For the prompt “Generate a digital painting of an apple”, DALL-E 3 responded with the following image:

Illustration created by the author using DALL-E 3. Used prompt: “Generate a digital painting of an apple”.

That apple is the product of millions of images the model has seen during training.

In contrast, a human artist would begin by seeking inspiration – studying references on specialised platforms like Pinterest, sketching rough shapes, refining details, experimenting with colours, and applying texture. This process can take anywhere from hours to days.

This process of browsing, collecting references, and closely observing objects – like apples – is an essential part of any artist’s workflow. It’s how they build a visual library in their mind: examining form, texture, lighting, colour variations, and stylistic choices. In many ways, this mirrors how an AI model is trained. Before it can generate images, the model must also be exposed to thousands – often millions – of examples. These examples, compiled into a dataset, serve the same purpose: to teach the model what an apple looks like from various angles, in different styles, and under varying lighting conditions.

But there are crucial differences – above all, scale and speed.

While a human artist relies solely on their own memory, experience, and ability to process inspiration over time, AI models are trained using vast computational resources. Large-scale models like DALL·E are trained in powerful data centers equipped with thousands of interconnected GPUs, TPUs, or – in the case of my company – accelerators specialised for training (such as Gaudi 3). These machines work in parallel, processing and analysing massive volumes of images at incredible speed. High-bandwidth network connections between machines, high-throughput storage systems, and specialised AI hardware enable these models to be trained on enormous datasets in days or weeks – absorbing what would take a human years, if ever.

In contrast, the artist’s brain is the only “hardware” available. No high-speed clusters or petabytes of image data – just intuition, memory, and practice. It’s this human limitation that AI bypasses, enabling it to “see” more examples, more variations, and more styles than a single person ever could in a lifetime.
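
To make the contrast concrete, this is roughly what “thousands of machines working in parallel” looks like at the framework level – a hedged sketch using PyTorch’s DistributedDataParallel, assuming a model that returns its training loss; production systems (Gaudi-based ones included) add far more machinery than this:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(model, data_loader):
    # One process per accelerator; init_process_group wires them together
    dist.init_process_group("nccl")
    device = torch.device("cuda", dist.get_rank() % torch.cuda.device_count())
    model = DDP(model.to(device))   # gradients are averaged across all workers
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for images, captions in data_loader:   # each worker trains on its own data shard
        loss = model(images.to(device), captions.to(device))  # assumed: forward returns the loss
        optimizer.zero_grad()
        loss.backward()      # backprop, plus an all-reduce that syncs gradients across workers
        optimizer.step()
```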

Matching Human Style: Mimicking Mediums

Digital artists today aren’t limited to one visual style. With tools like Procreate, they can simulate oil, watercolour, pencil, and ink – complete with paper textures and brush dynamics.

Want to mimic the blotchy softness of watercolour on rough paper? There’s a brush for that. Prefer the dense richness of oil on canvas? That too.

AI can mimic these styles too – if prompted correctly. For example:

Illustration created by the author using DALL-E 3. Used prompt: “Generate a digital painting of an apple in watercolour style”.

Illustration created by the author using DALL-E 3. Used prompt: “Generate a digital painting of an apple in oil style”.

Illustration created by the author using DALL-E 3. Used prompt: “Generate a digital painting of an apple in pencil sketch style”.

Each of these prompts instructs the model to mimic not only the subject but also the medium, colour tone, and artistic texture.

The Artistic Process

To create those same effects without AI, an artist must gather reference material, build sketches layer by layer, experiment with brush settings, apply base colours, add highlights and shadows, and adjust textures manually.

It’s a time-intensive but emotionally rich experience.

With AI, the process becomes more immediate – but also more detached.

Let’s retrace the steps of the creative journey I embarked on, one stage at a time.

Step 1: Rough digital sketch. Digital illustration created by the author.

Step 2: Detailed digital sketch. Digital illustration created by the author.

Step 3: Basic colouring. Digital illustration created by the author.

Step 4: Advanced colouring, texture and depth. Digital illustration created by the author.

The Ghost in the Gallery: Why AI Still Can’t Replace You

AI appears more than capable of replicating the artistic process – only faster and at scale. This raises a critical question: why would anyone choose traditional digital media anymore? Is there still room for authentic artistic expression and creativity, or has that pursuit become obsolete? And if human artists still have a place, can they surpass AI – and in what contexts might that be possible?

Let’s explore what happens when we supply a prompt with more precise and demanding instructions:

In response to the prompt “Generate an apple with two leaves in watercolour style”, the model produces the following result:

Illustration created by the author using DALL-E 3. Used prompt: “Generate an apple with two leaves in watercolour style”.

This is where the curtain lifts – and the cracks are revealed: when presented with a more refined and specific prompt – “Generate an apple with two leaves that face the same direction in watercolour style” – the result takes an intriguing and unforeseen turn.

Illustration created by the author using DALL-E 3. Used prompt: “Generate an apple with two leaves that face the same direction in watercolour style”.

Does it look the same to you? I would have to agree.

But does it fulfil the requirement – a clear and straightforward one, at that? Absolutely not.

Could it simply not comprehend? Was the fault mine – was my prompt too vague?

To remove any ambiguity, I refined the request further:

“Generate an apple with two leaves where both leaves face to the left side in watercolour style”

Illustration created by the author using DALL-E 3. Used prompt: “Generate an apple with two leaves where both leaves face to the left side in watercolour style”.

As seen above, the result diverged even further from the intended outcome.

This instance of miscommunication with the AI model is far from isolated. In fact, it feels like the more precisely one tries to guide it, the more elusive the desired outcome becomes – often resulting in frustration and wasted time.

What’s striking is that a request so simple a child could grasp it proved incomprehensible to the model.

Imagine a client offering a straightforward instruction to a human artist – only to be met with blank incomprehension, as if the sentence were spoken in a foreign tongue. In such a case, the client would undoubtedly take their business elsewhere. And in our scenario, it is the model that loses the commission.

Final Thoughts: The Art We Make Together

In the end, this is not a war between brush and code, but a dialogue. The machine offers speed, precision, and infinite variation; the artist brings emotion, intuition, and soul. One is not here to replace the other, but to expand what is possible.

We are fortunate to live in a time where imagination is no longer bound by the limits of our hands alone. The future of creation lies not in rivalry, but in harmony – where human spirit and artificial intelligence create side by side, each lending its own kind of magic.

About me

I am Maria Piterberg – an AI expert leading the Runtime software team at Habana Labs (Intel) and a semi-professional artist working across traditional and digital mediums. I specialise in large-scale AI training systems, including communication libraries (HCCL) and runtime optimisation. I hold a Bachelor’s degree in Computer Science.
