Activity
[Contribution heatmap for the past 12 months]

Memberships

KI-CHAMPIONS Community

7.6k members • Free

Data Alchemy

38k members • Free

14 contributions to Data Alchemy
Curious to hear your stories! šŸ™Œ
I'm super inspired by everything going on here and just getting started with Data Alchemy. What really excites me is hearing about real transformations: has anyone here started completely from scratch with data and AI, gone all-in with the program, and now works as a freelancer or has landed a great job because of it? Would love to hear your journey, what worked, what was challenging, and where you are now. Those stories are the ultimate motivation! šŸš€
0 likes • 0 comments
Tools outside the Big Tech
A few years ago, in a course about privacy and data management, we were introduced to a site (https://prism-break.org/en/) that lists alternatives to Big Tech companies, which tend to consolidate into a few groups and base their business model on trading data rather than on privacy. Now that I'm thinking a bit more about it, I'm starting to consider some of the alternatives mentioned there. I'd like to hear your thoughts. Have you ever considered alternatives to the mainstream options? Have you ever tried any? How was your experience?
2 likes • Jun 3
Yes — I've explored a few alternatives from Prism-Break over the years, like ProtonMail and Signal. It’s definitely eye-opening once you start noticing how deep the data trade model goes. That said, convenience and compatibility are still big hurdles. Curious: has anyone here found solid open-source alternatives that actually stick long-term in your workflow?
2 likes • Jun 4
@Oriol Fort Good point – and yes, I’m still using Signal regularly. As for ProtonMail: I used it for a while and really liked the privacy-first approach, but ended up switching back because of syncing issues across devices and occasional trouble integrating it with other tools I rely on (e.g., calendar workflows or third-party apps). Totally agree with you though – paying for services that respect privacy feels like the right way forward. Haven’t tried Kolab yet, but the Swiss angle is definitely a plus. Let me know how your experience goes if you give it a shot!
Welcome to Data Alchemy - Start Here
The goal of this group is to help you navigate the complex and rapidly evolving world of data science and artificial intelligence. This is your hub to stay up-to-date on the latest trends, learn specialized skills to turn raw data into valuable insights, connect with a community of like-minded individuals, and ultimately, become a Data Alchemist. Together, let's decode the language of data and shape a future where knowledge and community illuminate our way.

Rules
- Don't sell anything here or use Data Alchemy as any kind of funnel.
- We delete low-effort community posts and posts with poor English. Proofread your post first.
- Help us keep the posts high quality. If you see a low-quality post, click the 3 dots on the post and "Report To Admins".

Start by checking out these links
- Classroom
- Introduction
- Roadmap
- Contribution

Be Aware of Scammers
Please be aware that this is a public group. Unfortunately, some people abuse the Skool platform to send DMs or post comments to trick people. This is the internet, so always do your own due diligence. Never automatically trust anyone here on the Skool platform other than @Dave Ebbelaar's official account.

To kick things off, please comment below, introducing yourself. Let us know:
1. Your name and where you're from
2. What project(s) you're currently focused on

See you in the comments!
3 likes • Jun 3
Hey everyone, I'm Marvin from Germany. I’ve been diving into AI over the past months and — to be honest — tried to get started with everything at once. Currently focusing on building up practical GenAI skills and experimenting with real-world use cases. Excited to learn, share and connect with others on the same path. Let’s turn some data into gold 🧪✨
1 like • Jun 4
@Pierre-Henry Isidor Thanks a lot for the warm welcome and the thoughtful resources! šŸ™Œ Really appreciate the effort you put into supporting newcomers – I’ll definitely check both links out. Always up for learning and exchanging ideas, especially when it comes to Python and staying motivated. šŸ’” Looking forward to growing together with the community!
Building agents with Google Gemini and open source frameworks
This post helps you understand how to build AI agents with Google Gemini models using popular open-source frameworks such as LangGraph, CrewAI, LlamaIndex, and Composio. It touches on how each framework plays to its strengths in different scenarios. https://developers.googleblog.com/en/building-agents-google-gemini-open-source-frameworks/
1 like • Jun 3
Super helpful overview — great to see Google embracing open ecosystems with Gemini. I’ve been experimenting with CrewAI and LangGraph lately, and the idea of combining them with Gemini’s strengths is really exciting. Has anyone tried chaining multiple frameworks together for more complex agent behavior? Would love to hear how that went!
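Under the hood, most of these frameworks implement the same tool-calling loop: the model either requests a tool or returns a final answer, and the runtime feeds tool results back until it finishes. A minimal, framework-free sketch of that loop, with a stubbed model standing in for a real Gemini call (all names here are illustrative, not any framework's actual API):

```python
# Framework-free sketch of the tool-calling agent loop that frameworks
# like LangGraph and CrewAI build on. The "model" is a stub that first
# requests the calculator tool, then answers with the tool's result.

def calculator(expression: str) -> str:
    """A toy tool the agent can call."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def stub_model(messages):
    """Stand-in for a real LLM call (e.g., a Gemini request)."""
    if not any(m["role"] == "tool" for m in messages):
        # No tool output yet: ask the runtime to run a tool.
        return {"tool": "calculator", "args": {"expression": "6 * 7"}}
    # Tool result is available: produce the final answer.
    return {"answer": messages[-1]["content"]}

def run_agent(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = stub_model(messages)
        if "answer" in reply:
            return reply["answer"]
        # Execute the requested tool and append its result to the history.
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not finish within max_steps")

print(run_agent("What is 6 * 7?"))  # → 42
```

Chaining frameworks mostly means deciding which layer owns this loop; the loop itself stays the same shape.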
LLM Post-Training: A Deep Dive into Reasoning Large Language Models
Post-training is where the magic happens... šŸ˜‰ "Large Language Models (LLMs) have transformed the natural language processing landscape and brought to life diverse applications. Pretraining on vast web-scale data has laid the foundation for these models, yet the research community is now increasingly shifting focus toward post-training techniques to achieve further breakthroughs. While pretraining provides a broad linguistic foundation, post-training methods enable LLMs to refine their knowledge, improve reasoning, enhance factual accuracy, and align more effectively with user intents and ethical considerations. Fine-tuning, reinforcement learning, and test-time scaling have emerged as critical strategies for optimizing LLMs performance, ensuring robustness, and improving adaptability across various real-world tasks. This survey provides a systematic exploration of post-training methodologies, analyzing their role in refining LLMs beyond pretraining, addressing key challenges such as catastrophic forgetting, reward hacking, and inference-time trade-offs. We highlight emerging directions in model alignment, scalable adaptation, and inference-time reasoning, and outline future research directions. We also provide a public repository to continually track developments in this fast-evolving field." https://arxiv.org/abs/2502.21321 https://github.com/mbzuai-oryx/Awesome-LLM-Post-training
1 like • Jun 3
Incredibly relevant topic — post-training really is where models move from ā€œimpressiveā€ to ā€œuseful.ā€ The balance between alignment and capability is such a nuanced challenge. Curious to see how approaches like test-time scaling evolve in the next year. Thanks for sharing the paper and repo — looks like a goldmine for anyone deep in the LLM space!
Marvin Diebel
3
40 points to level up
@marvin-diebel-2479
I am just an interested beginner in AI

Active 9d ago
Joined May 20, 2025