Generative AI Course

For creatives who want authorship over AI, not just output.

A 12-week studio for designers, architects, visualizers, and artists building real command of image generation, 3D workflows, motion, automation, and AI-assisted web creation.

12 Weeks
36 Guided Hours
Overview

Who it is for, and what it unlocks.

Audience

  • People who tried ComfyUI and felt lost or overwhelmed.
  • Total beginners to node-based systems who want a clear mental model.
  • Creatives moving beyond drag-and-drop AI tools into reusable pipelines.
  • Artists, makers, and automation-focused teams exploring reproducible image and video generation.
  • Architects, interior designers, landscape architects, graphic designers, 3D artists, product designers, freelancers, and studios.

Objectives

  • Transform sketches and 3D massing into high-fidelity renders.
  • Build reusable custom AI workflows for repeated design tasks.
  • Rapidly iterate across styles, materials, mood, and lighting.
  • Animate architectural spaces and create cinematic motion outputs.
  • Progress from builder to expert across complex ComfyUI pipelines and automation systems.
  • Combine ComfyUI with Krita, plugins, and zero-code web or app development flows.
12
Weekly sessions in a structured progression.
36h
Total guided instruction across the course.
1
Final project pipeline connecting concepts to delivery.
Instructors

Learn from practitioners building real AI workflows.

Portrait of Mostafa Mohamed

Mostafa Mohamed

AI Specialist, 3D Visualizer

Mostafa Mohamed is an AI Workflow Specialist focused on building AI systems and design and visual production workflows for companies, with full-time experience at Stylus. He brings two years of hands-on work with ComfyUI and with open- and closed-source generative AI tools, and has contributed to projects with DSC, Emaar Misr, and Any Design Studio. He also co-founded Any More, an AI lab.

Portrait of Mariam Adel

Mariam Adel

AI Workflow Specialist, Design Architect

Mariam Adel is an AI Systems and Workflow Specialist focused on developing AI-driven systems and design production workflows. She has professional experience as an AI Specialist at Emaar Misr and brings 3 years of hands-on expertise with ComfyUI, as well as local and closed-source generative AI tools. She combines practical industry experience with academic involvement, having previously served as a Teaching Assistant at Ain Shams University. Additionally, she has worked as a Project Design Coordinator at Diaa Consult.

Notes

Format, pace, and studio expectations.

Duration

12 weeks, 1 day per week, 3 hours per session.

Total Time

36 guided hours across a progressive hands-on curriculum.

Course Note

Topics and duration may be modified by the instructor based on participant knowledge and skill level.
Powered by MeeM Studio
Sessions

Twelve sessions from first principles to finished systems.

Session 01

AI Foundations & ComfyUI Setup

  • AI terminology and useful websites
  • What is ComfyUI?
  • Download and install ComfyUI
Session 02

ComfyUI Interface & Basic Workflows

  • Interface overview
  • Simple text-to-image workflow
  • Image-to-image workflow
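The same text-to-image workflow built on the canvas can also be queued programmatically. A minimal sketch, assuming a local ComfyUI server on the default port 8188 and a workflow exported with "Save (API Format)"; the node id "6" and the fragment below are illustrative, not a full export:

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI address

def set_prompt_text(workflow: dict, node_id: str, text: str) -> dict:
    """Overwrite the text input of a CLIPTextEncode node in an API-format workflow."""
    patched = json.loads(json.dumps(workflow))  # deep copy via JSON round-trip
    patched[node_id]["inputs"]["text"] = text
    return patched

def queue_prompt(workflow: dict) -> bytes:
    """POST the workflow to ComfyUI's /prompt endpoint (needs a running server)."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"{COMFY_URL}/prompt", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Illustrative fragment of an API-format export: node "6" is the positive prompt.
workflow = {"6": {"class_type": "CLIPTextEncode",
                  "inputs": {"text": "placeholder", "clip": ["4", 1]}}}
patched = set_prompt_text(workflow, "6", "a sunlit concrete atrium, morning haze")
print(patched["6"]["inputs"]["text"])
```

Editing the exported JSON and re-queuing it is the first step from a one-off canvas toward the reusable pipelines this course builds up to.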
Session 03

Prompting & Models

  • Prompting instructions
  • Ollama, Florence, Qwen VL, AI Studio
  • Model types: SDXL, Flux, Qwen Image, Z-Image
Session 04

ControlNet & LoRA Workflows

  • Overview of ControlNet and IP Adapter
  • Using LoRAs
  • LoRA training
Task: Train your own LoRA
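Conceptually, a LoRA leaves the base model's weights frozen and learns a small low-rank update that is added on top: W' = W + (alpha/r)·B·A. A minimal numpy sketch of that merge, following the standard LoRA formulation rather than any specific trainer:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4      # rank r is much smaller than the layer size

W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in))          # trained low-rank factor
B = np.zeros((d_out, r))                # B starts at zero, so the update starts at zero

W_merged = W + (alpha / r) * (B @ A)    # LoRA merge: base plus scaled low-rank update
print(np.allclose(W_merged, W))         # True while B is still all zeros
```

Because only A and B are trained, a LoRA file is tiny compared to the base checkpoint, which is why one base model can carry many interchangeable styles.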
Session 05

Inpainting & Design Iteration

  • Inpainting workflows
  • Image editing models
  • Differences between normal and edit-specific inpainting
Task: Update materials, context, people, objects, and atmosphere
Session 06

Enhancement Pipeline

  • Segmentation and autodetection
  • Enhancing full images and selected parts
  • Upscaling
Task: Execute a full image enhancement pipeline
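The pipeline idea in this session is plain function composition: every stage takes an image and returns an image, so stages can be reordered, skipped, or swapped. A toy sketch on a numpy array standing in for an image; the stage names are illustrative, not ComfyUI nodes:

```python
import numpy as np

def normalize(img):
    """Stretch pixel values to the full 0-1 range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else img

def upscale_2x(img):
    """Nearest-neighbour 2x upscale by repeating rows and columns."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def run_pipeline(img, stages):
    """Apply each stage in order; each stage maps image -> image."""
    for stage in stages:
        img = stage(img)
    return img

img = np.array([[0.2, 0.4], [0.6, 0.8]])
out = run_pipeline(img, [normalize, upscale_2x])
print(out.shape)  # (4, 4)
```

The same shape of loop underlies a real enhancement graph: segmentation, per-region enhancement, and upscaling are just stages with this interface.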
Session 07

Krita Integration

  • Photo manipulation in Photoshop
  • Intro to Krita and AI plugins
  • Generate, upscale, and organize presets
  • Insert ComfyUI workflows and custom parameters in Krita
Task: Deploy a custom ComfyUI workflow within Krita
Session 08

3D Generation

  • 3D generation in Hunyuan
  • Trellis with textures
  • 3D models to segments
  • Advanced 3D workflows
Task: Construct a textured 3D asset from a 2D design
Session 09

AI Video Generation

  • Intro to local AI video generation
  • Online video pipeline
Task: Generate a professional cinematic animation
Session 10

LLMs, MCP & Automation

  • Anything LLM
  • MCP
  • Use ComfyUI as one tool among many
  • Website and app workflows with AI tools
  • Final project setup
Task: The final project
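"ComfyUI as one tool among many" is the pattern MCP formalizes: each capability is registered under a name with a described interface, and an agent decides which one to call. A minimal registry sketch in plain Python; the tool names and handlers are illustrative placeholders, not a real MCP server:

```python
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Decorator that registers a function under a tool name an agent can call."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("render_image")
def render_image(prompt: str) -> str:
    # Placeholder: in a real setup this would queue a ComfyUI workflow.
    return f"queued render for: {prompt}"

@tool("describe_image")
def describe_image(path: str) -> str:
    # Placeholder for a vision-model call (e.g. a local captioner).
    return f"caption for {path}"

result = TOOLS["render_image"]("courtyard at dusk")
print(result)
```

Once every capability sits behind a named interface like this, chaining them into website, app, or automation workflows is an orchestration problem rather than a rebuild.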
Session 11

Vibe Coding, Websites & AI Agents

  • Intro to vibe coding
  • Skills and AI agents
  • Building websites and apps with AI coding tools
  • UI tools: OpenCode, Claude, VS Code, Antigravity, Codex
  • Project follow-up
Task: Develop and launch a custom web application
Session 12

Arch Viz & Interior AI

  • AI in architectural visualization techniques
  • AI in interior design techniques
  • Final project follow-up
Task: Finalize the project
Showcase

Scroll through the outcomes this course is built to produce.

Multiple angles

One scene, many camera decisions.

01

Temporary Unreal-style placeholder: one environment explored through hero, eye-level, overhead, and detail shots so the output reads like a deliberate sequence.

  • Wide, mid, and close storytelling frames.
  • Consistent atmosphere across all views.
  • Ready for final render replacements later.
Camera set · Sequence · Placeholder
Unreal-style preview
Angle pack for one concept scene.
Replace with storyboard stills, render crops, or viewport captures.
Suggested future content: one project presented from four curated viewpoints.
Mood board

Atmosphere before geometry.

02

A previsualization board for lighting, palette, texture, and emotional direction before the final scene gets locked.

  • Reference logic tied to one visual outcome.
  • Multiple moods from the same brief.
  • Strong bridge from inspiration to production.
Look dev · Palette · Atmosphere
Reference board
Palette, materials, tone, and lighting notes.
Ideal for a collage of references and early style frames.
Suggested future content: one mood board directly connected to the final render set.
Space Perception

Make scale and depth feel believable.

03

Show how camera height, lens feel, layering, and human-scale anchors transform flat outputs into convincing spaces.

  • Before-and-after perception fixes.
  • Lens, horizon, and parallax cues.
  • Useful for architecture and interiors.
Depth · Lens feel · Scale
Spatial study
Foreground, midground, and perspective correction.
Swap in final scene comparisons when ready.
Suggested future content: a perception correction sequence from flat image to immersive space.
Add people

Inject life without breaking the scene.

04

Context figures added with believable scale, gesture, and lighting so the environment feels inhabited and readable.

  • Occupancy and circulation cues.
  • Stronger understanding of function and scale.
  • Designed as communication, not decoration.
Human context · Scene life · Scale
Population pass
Empty composition turned into a socially readable environment.
Best future slot for before-and-after comparisons.
Suggested future content: empty render transformed by contextual population.
3D integration

Blend generated imagery with structured 3D workflows.

05

A 3D base scene pushed into richer visual outcomes while preserving geometry, intent, and spatial control.

  • Geometry, segmentation, and AI enhancement together.
  • 2D and 3D as one workflow.
  • Applicable to architecture, products, and environments.
2D + 3D · Bridge · Workflow
Integration sample
Simple mesh, stronger atmosphere, retained structure.
Use later for segmented views, texture passes, and finals.
Suggested future content: 3D base scene translated into a polished presentation frame.
Restore details

Recover resolution, edges, and surface fidelity.

06

Enhancement workflows for facades, materials, products, and close-up details after generation and upscaling.

  • Selective restoration instead of blunt sharpening.
  • Recover texture, edges, and legibility.
  • Quality-control pass before delivery.
Enhancement · Upscale · Detail
Detail recovery
Blurred source refined into a presentation-ready image.
Ideal future placement for zoomed comparison crops.
Suggested future content: crops that prove restored material and edge quality.
Refine people

Fix faces, hands, posture, and integration.

07

Cleanup workflows for AI-generated characters so they hold up at closer viewing and feel naturally embedded in the final scene.

  • Anatomy and gesture correction.
  • Realism without losing mood.
  • Useful for hero stills and portfolio frames.
Human polish · Cleanup · Post pass
Refinement panel
Close-range cleanup for faces, hands, and figure coherence.
Best future fit: side-by-side evidence of refinement quality.
Suggested future content: portrait or scene crops showing refinement stages.
Style transfer

Hold the structure, shift the visual language.

08

One brief interpreted through multiple aesthetics while the underlying composition stays coherent and recognizable.

  • Consistent structure across several looks.
  • Strategic variation, not random filtering.
  • Ready for later style comparison boards.
Look shift · Variants · Reference style
Style matrix
One composition rendered through multiple visual identities.
Use later for controlled mood and style triptychs.
Suggested future content: one structure shown across several style directions.
Animation

Turn still concepts into motion stories.

09

Camera moves, parallax motion, and AI video sequences that feel cinematic enough for presentations and launch campaigns.

  • Still-to-motion workflow.
  • Atmosphere, timing, and camera path as design tools.
  • Ready for embedded video later.
Motion · Video pipeline · Cinematic
Animation preview
Poster frame for a generated motion sequence.
Later replace with a real clip or image sequence.
Suggested future content: one hero frame plus a short motion sample.
LoRA training

Train a visual behavior, not just a single image.

10

Custom LoRA training presented as a reusable system for repeating a style, identity, material logic, or project-specific direction.

  • Dataset thinking and training intent.
  • Repeated consistency across outputs.
  • Ready for charts and training notes later.
Custom model · Consistency · Reusable style
LoRA training story
Dataset in, controlled visual signature out.
Use later for training samples and consistency checks.
Suggested future content: training inputs paired with output consistency.
Live sketch

Move from rough lines to developed imagery fast.

11

Sketch-to-image workflows where hand-drawn intent survives the jump into rendered atmosphere, materials, and composition.

  • Fast ideation without losing authorship.
  • AI as amplification of the designer’s linework.
  • Good fit for live workshop moments.
Sketch to image · Fast iteration · Authorship
Live ideation
Concept sketch expanded into a cinematic design frame.
Best future use: rough sketch on one side, refined output on the other.
Suggested future content: a live sketch progression from linework to polished render.
All in one click

Bundle the workflow into a single repeatable action.

12

A packaged system where setup, prompting, enhancement, and export are orchestrated as one repeatable production flow.

  • Complex chains simplified for teams and studios.
  • Operational rather than experimental.
  • Ready for workflow diagrams or automation screenshots.
Automation · Systemized · Production ready
Workflow package
One action orchestrating a full visual production pass.
Later replace with a node graph, UI screenshot, or export sheet.
Suggested future content: compressed workflow overview plus resulting outputs.
Vibe coding

Translate the visual thinking into a working product.

13

The final slot extends beyond images into websites, tools, and product concepts built with AI coding flows and strong visual direction.

  • Connect visual prompts to real implementation.
  • AI-assisted product making as part of the same stack.
  • Ideal home for landing pages, tools, and microsites.
Web output · AI coding · Product thinking
Build preview
From concept language to a shipped interface direction.
Later place screenshots, product flows, or web demos built during the course.
Suggested future content: a live product or website case study built from the course workflow.
Enroll

Reserve your seat.

Early Bird

Take your seat before the first 10 spots are gone.

15% Off
First 10 Seats
Join a hands-on cohort built for designers, architects, visualizers, and artists who want guided practice, repeatable systems, and outcomes worth shipping.
Round Date May 10, 2026
Duration 12 weeks
Guided Time 36 guided hours
Course Language Arabic
Practical workflow training · Real-world project outcomes · Serious cohort only
Seat Reservation

Secure the early cohort rate.

Standard pricing returns after the first 10 seats are filled.

6,799 EGP (early bird)
7,999 EGP (standard)
  • 12 guided sessions across foundations, workflows, prompting, and production.
  • Hands-on coverage of ComfyUI, ControlNet, LoRA workflows, inpainting, and enhancement pipelines.
  • Practical learning path focused on real creative and architectural AI outputs.
Early enrollment pricing is reserved for the first ten participants.
Complete a short guided application and we will contact you by email or WhatsApp to confirm the next step.
Request Details
Questions before applying? Message us directly on WhatsApp.