8 contributions to Builder’s Console Log 🛠️
Creating Context for AntiGravity
Since I have started an app in Unreal Engine using AntiGravity, I decided it would be good to use the UE documentation to give the AI something to go on. I asked ChatGPT if it could get me the documentation so I could give it to Gemini 3.0 Pro in AntiGravity. ChatGPT told me of course it could: since I have UE 5.7.1 on my system, all the docs come bundled with the engine as upn files. It took all of those docs and converted them to Markdown (.md) files. This is really good, as Gemini 3.0 now has all the documentation to go on. I can also give it the documentation for any plugins I am using, so it will have that too and know everything it needs to know. Remember, context is important!!
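The post doesn't show how ChatGPT actually did the conversion, but the basic idea is just "walk the documentation folder and re-save every doc as a .md file." Here is a minimal sketch of that in Python; the install path, file extensions, and output folder are assumptions you would adjust for your own setup, and real doc files may need extra markup cleanup beyond a straight copy.

```python
from pathlib import Path

# Hypothetical paths -- point SOURCE_DIR at wherever your UE 5.7 install keeps
# its documentation files and OUTPUT_DIR at the folder you hand to AntiGravity.
SOURCE_DIR = Path("C:/Program Files/Epic Games/UE_5.7/Engine/Documentation")
OUTPUT_DIR = Path("./ue_docs_md")
DOC_EXTENSIONS = {".upn", ".txt"}  # assumption: the bundled docs are plain-text files

def convert_docs(source: Path, output: Path) -> int:
    """Copy every matching doc file under `source` into `output` as a .md file."""
    count = 0
    for doc in source.rglob("*"):
        if not doc.is_file() or doc.suffix.lower() not in DOC_EXTENSIONS:
            continue
        text = doc.read_text(encoding="utf-8", errors="ignore")
        # Mirror the folder structure so related pages stay grouped together.
        target = output / doc.relative_to(source).with_suffix(".md")
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(text, encoding="utf-8")
        count += 1
    return count

if __name__ == "__main__":
    print(f"Converted {convert_docs(SOURCE_DIR, OUTPUT_DIR)} docs to Markdown")
```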
Something New...AntiGravity with Unreal Engine
I asked Gemini 3 if AntiGravity could be used to make a game in Unreal Engine, and it surprised me that it could. It gave me this, and I am going to try it and see what it can do. Keeps me busy and out of my wife's hair.

The Ultimate AntiGravity Prompt (UE 5.7.0)

This prompt integrates every decision we've made: the engine, the free code base (Lyra), the low-cost server tools, the PCG world generation, and the aesthetic goal. Copy and paste this comprehensive prompt into your AntiGravity planner:

Goal: Create a server-authoritative, procedural, multiplayer survival game prototype inspired by Valheim.
Engine: Unreal Engine 5.7.0

Asset Integration:
1. Networking Core: Use the Lyra Starter Game's C++ and Plugin architecture as the primary structural foundation.
2. Server Build: Integrate the Easy Dedicated Server Plugin to handle all server compilation and deployment.
3. Server Discovery: Integrate the Multiplayer Server Browser Plugin to create a lobby UI that lists available game sessions.
4. Aesthetics: Utilize meshes from the Procedural Nature Pack Vol 1 and apply a stylized post-process material for the Valheim look.

System Generation (Prioritized Order):
1. Networking Setup: Modify the Lyra project to connect directly to the server list provided by the Multiplayer Server Browser and launch the dedicated server executable built by the Easy Dedicated Server plugin.
2. Procedural Biome Generation: Implement the PCG Framework to use Perlin noise and create the Meadows and Black Forest biomes. Populate these biomes using the specified Nature Pack meshes.
3. Core Survival Systems: Implement a replicated Inventory Component and a basic Crafting Logic that can consume two resources (Wood, Stone) to produce one basic item (Campfire).

Provide all required C++ code, Blueprint graphs, and configuration settings.
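To make item 3 of the "System Generation" list above a bit more concrete, here is a minimal, engine-agnostic sketch of the recipe and inventory logic it describes. In the real project this would be a replicated UE C++ component driven by Blueprints; the class names and quantities below are made up purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Recipe:
    inputs: dict[str, int]   # resource name -> quantity consumed (quantities are illustrative)
    output: str              # item produced

CAMPFIRE = Recipe(inputs={"Wood": 2, "Stone": 2}, output="Campfire")

@dataclass
class Inventory:
    items: dict[str, int] = field(default_factory=dict)

    def add(self, item: str, amount: int = 1) -> None:
        self.items[item] = self.items.get(item, 0) + amount

    def can_craft(self, recipe: Recipe) -> bool:
        return all(self.items.get(res, 0) >= qty for res, qty in recipe.inputs.items())

    def craft(self, recipe: Recipe) -> bool:
        """Consume the recipe inputs and add the output; returns False if inputs are missing."""
        if not self.can_craft(recipe):
            return False
        for res, qty in recipe.inputs.items():
            self.items[res] -= qty
        self.add(recipe.output)
        return True

if __name__ == "__main__":
    inv = Inventory()
    inv.add("Wood", 2)
    inv.add("Stone", 2)
    print(inv.craft(CAMPFIRE), inv.items)  # True {'Wood': 0, 'Stone': 0, 'Campfire': 1}
```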
Finished my app, so far...
I have finished my app, at least the first part of it. It is at https://creator.bcardiai.com/. If you want to do anything beyond logging in, you will need a Gemini API key and a video. If you have a YouTube channel, it will do the magic of putting your video up as a draft on your channel with the information from the app. You can log on with email, Google, or Facebook. Your API key and YouTube channel token are encrypted in Google's Firestore database (see image), so nobody can get them.

I won't tell you that deploying was real easy. I have it on my Dokploy server, and there were a few errors getting it from my local Node.js development setup into Docker on the server, but Gemini 3.0 in AntiGravity held my hand all the way through until it was up. In the image with all the apps, what you see is the memory used by each app in Docker; this app is the ones that say creator-stack. You can see I have a few apps on this server and have a few more going up. I was using Coolify on the server for my apps but have changed to Dokploy, as it is lighter and fairly easy. I even have GPT-5.1 Pro set up with Dokploy, and it helps write the Compose files and tells me what I need to do if I get lost.

With Gemini 3.0 and GPT-5.1, this whole thing of making a very good app was easy, even for someone who is not very good at programming. I am now going to try making a new app with Trae Solo.
1 like • 7d
@Matt Fuller I'll take a look at that and work on it tomorrow. I know the Google and Facebook logins do work.
1 like • 7d
This is the readme that the AI made for me for my big bcardiai app.
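The post above says the Gemini API key and YouTube channel token are encrypted before they land in Firestore, but the app's code isn't shown. Here is a minimal sketch of that pattern, assuming symmetric encryption with the cryptography library and a hypothetical "users" collection; the field names, key source, and collection layout are all made up, and the real app is a Node.js project, so treat this Python version purely as an illustration.

```python
import os
from cryptography.fernet import Fernet          # pip install cryptography
from google.cloud import firestore              # pip install google-cloud-firestore

# Assumption: the encryption key comes from outside the database (an env var or
# secret manager), so the encrypted values in Firestore are useless on their own.
fernet = Fernet(os.environ["TOKEN_ENCRYPTION_KEY"])
db = firestore.Client()

def store_user_secrets(user_id: str, gemini_api_key: str, youtube_token: str) -> None:
    """Encrypt the user's secrets before writing them to a hypothetical 'users' collection."""
    db.collection("users").document(user_id).set(
        {
            "gemini_api_key": fernet.encrypt(gemini_api_key.encode()).decode(),
            "youtube_token": fernet.encrypt(youtube_token.encode()).decode(),
        },
        merge=True,
    )

def load_gemini_key(user_id: str) -> str:
    """Read the document back and decrypt the stored API key."""
    doc = db.collection("users").document(user_id).get()
    return fernet.decrypt(doc.get("gemini_api_key").encode()).decode()
```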
Gemini 3.0 is making Gemini 4.0
Your information is very aligned with the current direction of AI development at Google as of late 2025. While there isn't a single official press release stating "Gemini 3.0 is the sole programmer for Gemini 4.0," the reality is that Gemini 3.0 is indeed playing a critical, "force-multiplying" role in developing the next generation of models. Based on the capabilities of the newly released Gemini 3.0 (November 2025), here is how it is actively "programming and training" the future Gemini 4.0:

1. Generating "Synthetic Data" for Training
This is the most significant area. The internet has a finite amount of high-quality human text. To make Gemini 4.0 smarter, Google is likely using Gemini 3.0 to generate vast amounts of synthetic data: high-quality, reasoning-heavy textbook examples, code, and dialogue.
- Why it helps: Gemini 3.0 (specifically the "Deep Think" variant) can produce step-by-step reasoning chains that are cleaner and more logically sound than much of the raw data found on the web. Feeding this "perfect" data into Gemini 4.0 helps the new model learn faster and reason better.

2. "Agentic Coding" & Infrastructure Optimization
Gemini 3.0 is marketed heavily for its "agentic coding" and "vibe coding" capabilities. This means it doesn't just autocomplete lines of code; it can manage entire coding workflows.
- The "AlphaEvolve" Effect: Google DeepMind has developed systems like AlphaEvolve, which use Gemini models to autonomously discover and optimize algorithms.
- Real-world example: Gemini models have already been used to write better "kernels" (low-level code) for Google's TPU chips, making the training process itself faster and more efficient by about 1%. It is highly probable that Gemini 3.0 is currently writing and optimizing the complex codebases that run the training clusters for Gemini 4.0.

3. Self-Correction and "Constitutional" AI
Google uses a technique where an AI model helps "supervise" another model.
- RLAIF (Reinforcement Learning from AI Feedback): Instead of relying solely on humans to rate thousands of AI responses (which is slow and expensive), Google uses a highly capable model (like Gemini 3.0) to rate and critique the outputs of the model currently being trained (Gemini 4.0). This allows for massive scale in fine-tuning the new model's behavior.
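Section 3 describes RLAIF only in prose. As a rough illustration of the loop it is talking about, here is a toy Python sketch in which a stronger "judge" model ranks a trainee model's answers to build preference data; both model calls are hypothetical placeholders, not real Google APIs.

```python
from typing import Callable

def collect_ai_feedback(
    prompts: list[str],
    trainee_generate: Callable[[str], list[str]],   # stand-in for the model being trained (the "Gemini 4.0" role)
    judge_score: Callable[[str, str], float],       # stand-in for a stronger model rating answer quality (the "Gemini 3.0" role)
) -> list[dict]:
    """Build a preference dataset: for each prompt, rank candidate answers by the judge's score."""
    dataset = []
    for prompt in prompts:
        candidates = trainee_generate(prompt)
        ranked = sorted(candidates, key=lambda answer: judge_score(prompt, answer), reverse=True)
        dataset.append({"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]})
    return dataset

# The resulting {prompt, chosen, rejected} triples are what a downstream
# preference-tuning step (a reward model or DPO-style objective) would consume.
```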
Things you can do with AI...
I have a BirdNetPi running outside by my bird feeder (it actually runs inside on my HP EliteDesk Mini Proxmox server, with a long PoE Ethernet cable out to my Reolink camera). I decided to ask several different AIs to help me add an AI Report to it; I think it is the only BirdNetPi that has this. You can see it at https://birdnetpi.bcardi.org/. The AI Analysis is at the top, and it takes a second or two to run and display. It just shows you can do just about anything with AI.
0 likes • 12d
Sorry if you tried to go to the birdnetpi site earlier today. I put up a new server and had to change the DNS and forgot to include birdnetpi. It is now fixed: https://birdnetpi.bcardi.org/
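The post doesn't include the code behind the AI Report, so here is a rough sketch of how such a feature could work: pull recent detections out of the BirdNetPi database and ask Gemini for a short write-up. The database path, table and column names, and the model name are assumptions based on a typical BirdNetPi install, not the site's actual implementation.

```python
import os
import sqlite3
import google.generativeai as genai   # pip install google-generativeai

# Assumption: detections live in an SQLite file like the one below, in a
# "detections" table with Date, Time, Com_Name and Confidence columns.
DB_PATH = os.path.expanduser("~/BirdNET-Pi/scripts/birds.db")

def recent_detections(limit: int = 50) -> list[tuple]:
    """Grab the most recent detections to summarize."""
    with sqlite3.connect(DB_PATH) as conn:
        return conn.execute(
            "SELECT Date, Time, Com_Name, Confidence FROM detections "
            "ORDER BY Date DESC, Time DESC LIMIT ?",
            (limit,),
        ).fetchall()

def ai_report() -> str:
    """Ask Gemini for a short, human-readable report on the latest detections."""
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-2.0-flash")  # placeholder model name
    rows = "\n".join(f"{d} {t} {name} ({conf:.2f})" for d, t, name, conf in recent_detections())
    prompt = (
        "You are a birding assistant. Summarize the activity at my feeder from "
        "these BirdNET detections (date, time, species, confidence):\n" + rows
    )
    return model.generate_content(prompt).text

if __name__ == "__main__":
    print(ai_report())
```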
Gerald Haygood
@gerald-haygood-4595
Retired

Joined Nov 21, 2025