3D renders and JSON data for NFTs straight from Blender using Python.

8 min read · Jun 9, 2022


Blender allows you to run Python scripts directly inside the program, changing pretty much anything in your scene using code. I wanted to see if I could render out a series of images with different traits along with the corresponding JSON files so I could create an NFT series without having to manage layers. Turns out it’s not too difficult.

Blender render of a shelved dragon head project.

A few months ago I had an idea to create an NFT series of voxel dragon heads. I was using Blender to create the renders, but the idea got put on the back burner as other ideas took precedence. I’m still not planning anything with the series, so I figured I would share how I was going to do it in case anyone wants to expand on the idea for their own project.

TLDR: I created a Blender scene with the items I wanted to render. For each “trait” that would be listed in the NFT JSON data, I created a separate object with a specific name in Blender. I then used Python to run through a series of loops that change the color of each object, update the name of the trait, render/save the image, and save the JSON file. The traits/images are NOT randomized. The next step for the artist would be to upload the data to long-term storage and mint on the respective blockchain.
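The loop structure described above can be sketched in plain Python. The trait names and color palettes here are illustrative placeholders, not the values from my scene; inside Blender you would swap the comment for the actual bpy material call:

```python
import itertools

# Illustrative trait palettes -- replace with your own trait names and RGBA values.
EYE_COLORS = {"Red": (1, 0, 0, 1), "Green": (0, 1, 0, 1), "Blue": (0, 0, 1, 1)}
SKIN_COLORS = {"Gold": (1, 0.8, 0, 1), "Silver": (0.8, 0.8, 0.8, 1)}

def build_series():
    """Walk every trait combination in order, yielding (index, attributes)."""
    combos = itertools.product(EYE_COLORS.items(), SKIN_COLORS.items())
    for index, ((eye_name, eye_rgba), (skin_name, skin_rgba)) in enumerate(combos, 1):
        # Inside Blender, this is where you would set each material, e.g.:
        # bpy.data.materials["Eyes"].node_tree.nodes["Principled BSDF"] \
        #     .inputs[0].default_value = eye_rgba
        attributes = [
            {"trait_type": "Eyes", "value": eye_name},
            {"trait_type": "Skin", "value": skin_name},
        ]
        yield index, attributes
```

Because every combination is enumerated rather than randomized, the series size is just the product of the palette sizes (here 3 × 2 = 6).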

Here’s a GitHub repo with the blend file and the Python script. The script is pretty heavily commented if you just want to dive in. The JSON structure in the script can be modified to accommodate any blockchain. The default is what you would need to mint through Frame It on Elrond.
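Writing one metadata file per render is a few lines of standard-library Python. The field names below are a generic NFT metadata example, not Frame It’s exact schema; adjust them to whatever your target blockchain expects:

```python
import json

def write_metadata(path, index, attributes):
    """Save one NFT's metadata as JSON. Field names here are generic
    placeholders -- match them to your target blockchain's schema."""
    metadata = {
        "name": f"Voxel Dragon #{index}",
        "description": "One of a series of 3D-rendered dragon heads.",
        "attributes": attributes,
    }
    with open(path, "w") as f:
        json.dump(metadata, f, indent=2)
```

Calling this once per loop iteration, with the same sequential index used for the image filename, keeps each JSON file paired with its render.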

Be warned! I am not a developer and have never been a professional developer, so my coding skills are unrefined and based on what I gleaned from the internet. Please don’t hold my coding skills against me! I am assuming you have a basic understanding of computer programming concepts like scope, functions, counters, strings, integers, etc. If not, you might want to read up on basic Python programming.

How NFTs are typically created (quick overview): If you’re familiar with NFTs and the collections available across multiple blockchains, you know they can sometimes number in the thousands, with each one being slightly different than the next. This is usually accomplished by the artist creating multiple layers, one for each differing trait in the series, and randomly stacking those layers to create multiple variations of the base character. There are a lot of articles and tools that have been created explicitly for this purpose so I won’t cover them here.

Why I didn’t want to use layers with 3D renders: At first, I figured since all of the tools to manage and stack 2D layers were readily available, I would just render my 3D character traits on separate layers and combine them like most other projects. The problem I ran into was, since I was hiding some objects, those hidden objects wouldn’t be casting shadows or reflecting light on the objects I was actually rendering.

For example, I was rendering dragons with horns. If I hide the horns so they can be on a separate layer, the shadows from the horns wouldn’t render on the head because it wasn’t visible in the scene. I also wanted the eyes to glow. But if I rendered the horns by themselves, the glowing light from the eyes wouldn’t be cast on the horns because the eyes and head were hidden. These are probably things most people using the images for a tiny profile picture (PFP) wouldn’t notice, but I’m kinda a stickler for details like that.

There are some tricks, techniques, and features you can use to get around this problem, but in my mind, I just wanted to render out a full image, with all of the objects and lights interacting with each other in the correct way. Plus, by using code, I wouldn’t have to worry about using a different program to process all of the layers. I could do it all in Blender with the images and JSON files packaged in one folder ready to upload to Arweave or IPFS and mint to the blockchain.

Scene setup: For my dragon head scene, I kept it very simple. I broke my model up into five separate objects, all using a basic Principled BSDF material. The five objects are, as you can see in the screenshot, Eyes, Horn Stripes, Horns, Skin, and Skin Stripes. Then there is one point light, and one camera. Like I said, very simple. For your project, you could use more objects, hide them, move them, scale them, all through code.

Screenshot of my scene.

Before you need to touch any Python code, make sure your scene renders exactly the way you want. For my objects, I was just changing the RGB values, so that’s what I needed to keep track of. These numbers are going to represent the color trait for each object, so I needed to make sure they were locked in and would produce an image I was happy with.

Write down your material RGB values.

At this point, your scene should be set up exactly how you want it, with all of the objects having unique names and materials.

Python, here we come: As I said before, I’m not a Python developer or seasoned programmer, but I did take a few programming classes in college 15–20 years ago before switching to an unrelated career. So if I use some terrible coding practices in my script, or do something that hasn’t been done in ages, feel free to take the code and do what you want with it. If you do modify and release it, I would appreciate a shoutout, but it’s not mandatory. It is all released under the MIT license.

Spying on Blender’s back end: You might be thinking you’ll have to dig through the Blender API docs for specific Python commands if you want to modify the script, but you can actually see what commands Blender is using in real time, as you’re creating. If you open a new scene, go to any of your panels, click the “Editor Type” dropdown, and select “Info” under the Scripting column. You can now watch what code/commands are used as you work.

Change one panel to “Info”

Go ahead and add a simple object. In my case, I added a sphere. Here’s what the Info panel spits out.

bpy.ops.mesh.primitive_uv_sphere_add(radius=1, enter_editmode=False, align='WORLD', location=(0, 0, 0), scale=(1, 1, 1))

That is the Python code you would use to create a default UV sphere. The items inside the parentheses should be self-explanatory.

Want to change an item material to all red? Use this.

bpy.data.materials["Material"].node_tree.nodes["Principled BSDF"].inputs[0].default_value = (1, 0, 0, 1)

This tells Blender the RGBA values of the material. In this case, red is 1, green is 0, blue is 0, and alpha is 1 (totally opaque).
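Blender expects each channel in the 0–1 range, so if your palette is written in the more familiar 0–255 values, a small helper can convert them. Note this is a naive linear scaling: Blender treats these inputs as linear color, so an sRGB hex color converted this way won’t match on screen exactly without gamma correction.

```python
def rgba_255_to_blender(r, g, b, a=255):
    """Convert 0-255 RGBA channel values to the 0.0-1.0 floats Blender expects.
    Naive linear scaling -- no sRGB-to-linear gamma correction is applied."""
    return (r / 255, g / 255, b / 255, a / 255)
```

For example, `rgba_255_to_blender(255, 0, 0)` gives the same pure-red tuple `(1.0, 0.0, 0.0, 1.0)` used in the material line above.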

No matter what you do in Blender, you’ll see its corresponding command in the Info pane. There are some nuances to the code and you might have to add a few things to make your script work, but using the data from the Info pane should get you through most of your custom coding needs.

How to use my script: I’m not going to go through my script here, as it is heavily commented to help with understanding. If you would like to give it a shot and watch it in action, download the RenderSeries.blend file and the RenderSeries.py file from the GitHub repo. The blend file was created using Blender 3.0.0. When you open it, the layout should have the Text Editor pane already open at the bottom.

Click the folder icon and navigate to the RenderSeries.py file. Before you can actually run it, you will need to modify the output path in the script. This is the location you want Blender to save the images and JSON data to.
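That output step can be sketched like this. The directory and zero-padded filename pattern are placeholder choices, and since `bpy` is only importable when running inside Blender, it is imported lazily inside the render function:

```python
import os

def output_path(base_dir, index, ext="png"):
    """Build a zero-padded, sequential filename like renders/001.png."""
    return os.path.join(base_dir, f"{index:03d}.{ext}")

def render_image(base_dir, index):
    """Render the current scene to disk. Only works inside Blender."""
    import bpy  # available only when this runs inside Blender's Python
    bpy.context.scene.render.filepath = output_path(base_dir, index)
    bpy.ops.render.render(write_still=True)
```

Zero-padding the index keeps the files in the same order in your file browser as in your loop, which makes it easier to spot-check renders against their JSON files.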

Click the folder to find your python script.

WARNING!: Running the script as-is will create 576 images at 256x256 pixels (around 110 KB each) and 576 JSON files (each about 560 bytes). Depending on the speed of your computer or your storage capacity, you may want to comment out some colors/options.

Save the script after your modifications, be sure to reload the file in the Text Editor pane, and press the play button. Blender doesn’t really give you any feedback unless there is a problem, in which case the script will error out and stop. If that happens, you’ll have to debug your changes and see where you went wrong. I didn’t have much success debugging inside Blender because, from what I’ve read, you need a console open to read the errors. If you need to go that route, good luck; I didn’t have a good experience with it.
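One way around that, assuming the `blender` executable is on your PATH, is to run Blender headless from a terminal; Python tracebacks and `print()` output then appear directly in that terminal instead of a hidden console:

```shell
# -b runs Blender in the background (no UI), -P executes a Python script
blender -b RenderSeries.blend -P RenderSeries.py
```

This is also handy for kicking off long render batches without keeping the Blender UI open.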

All set up and ready to go: If you ended up with a bunch of .PNG and JSON files in one directory in sequential order, congratulations, you’re now ready to upload them to long term storage and mint them!

The easiest way I have found to create a large series is to use the smart contract minting options on Frame It which runs on the Elrond blockchain. I understand Elrond doesn’t have the same amount of adoption as some other chains, but I have had a really good experience using their tools and interacting with the community.

I minted a 500 piece series created using AI on Frame It and have nothing but good things to say about them. You can read about that HERE.

Thanks for taking the time to read this article and I hope it helps you in some way. I am by no means a Blender or Python expert, but I think these two files can provide a starting point for others who are creating a 3D NFT series.

Good luck, have fun, and never stop learning!