<!DOCTYPE html><html lang="en"><head>
<!-- Google Tag Manager -->
<script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
})(window,document,'script','dataLayer','GTM-PPTNN6WD');</script>
<!-- End Google Tag Manager -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-R0PBYHVQBS"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-R0PBYHVQBS');
</script>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Complete Anatomy of the Optical Motion Capture Pipeline — From Cameras to Motion Data - Mingle Studio DevLog</title>

<link rel="icon" type="image/x-icon" href="/images/logo/mingle-logo.ico">
<link rel="shortcut icon" href="/images/logo/mingle-logo.ico">
<link rel="icon" type="image/webp" href="/images/logo/mingle-logo.webp">
<link rel="apple-touch-icon" href="/images/logo/mingle-logo.webp">

<link rel="canonical" href="https://minglestudio.co.kr/en/devlog/optical-mocap-pipeline">
<meta name="theme-color" content="#ff8800">

<meta name="description" content="An in-depth guide to the entire optical motion capture technical pipeline. We cover camera installation, PoE networking, 2D centroids, calibration, 3D reconstruction, skeleton solving, post-processing, and on-set practical issues in 10 detailed steps.">
<meta name="author" content="Mingle Studio">

<meta property="og:title" content="Complete Anatomy of the Optical Motion Capture Pipeline — From Cameras to Motion Data">
<meta property="og:description" content="An in-depth guide to the entire optical motion capture technical pipeline. We cover camera installation, PoE networking, 2D centroids, calibration, 3D reconstruction, skeleton solving, post-processing, and on-set practical issues in 10 detailed steps.">
<meta property="og:url" content="https://minglestudio.co.kr/en/devlog/optical-mocap-pipeline">
<meta property="og:type" content="article">
<meta property="og:image" content="https://minglestudio.co.kr/blog/posts/optical-mocap-pipeline/images/thumbnail.webp">
<meta property="og:locale" content="en_US">
<meta property="og:site_name" content="Mingle Studio">
<meta property="article:published_time" content="2026-04-05">

<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="Complete Anatomy of the Optical Motion Capture Pipeline — From Cameras to Motion Data">
<meta name="twitter:description" content="An in-depth guide to the entire optical motion capture technical pipeline. We cover camera installation, PoE networking, 2D centroids, calibration, 3D reconstruction, skeleton solving, post-processing, and on-set practical issues in 10 detailed steps.">
<meta name="twitter:image" content="https://minglestudio.co.kr/blog/posts/optical-mocap-pipeline/images/thumbnail.webp">

<link href="https://hangeul.pstatic.net/hangeul_static/css/nanum-square.css" rel="stylesheet">
<link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.5.0/css/all.min.css" rel="stylesheet">
<link rel="stylesheet" href="/css/common.css?v=20260404">
<link rel="stylesheet" href="/css/devlog.css?v=20260404">

<link rel="alternate" hreflang="ko" href="https://minglestudio.co.kr/devlog/optical-mocap-pipeline">
<link rel="alternate" hreflang="en" href="https://minglestudio.co.kr/en/devlog/optical-mocap-pipeline">
<link rel="alternate" hreflang="ja" href="https://minglestudio.co.kr/ja/devlog/optical-mocap-pipeline">
<link rel="alternate" hreflang="zh" href="https://minglestudio.co.kr/zh/devlog/optical-mocap-pipeline">
<link rel="alternate" hreflang="x-default" href="https://minglestudio.co.kr/devlog/optical-mocap-pipeline">
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "BlogPosting",
"headline": "Complete Anatomy of the Optical Motion Capture Pipeline — From Cameras to Motion Data",
"description": "An in-depth guide to the entire optical motion capture technical pipeline. We cover camera installation, PoE networking, 2D centroids, calibration, 3D reconstruction, skeleton solving, post-processing, and on-set practical issues in 10 detailed steps.",
"datePublished": "2026-04-05",
"author": { "@type": "Organization", "name": "Mingle Studio" },
"publisher": { "@type": "Organization", "name": "Mingle Studio" },
"url": "https://minglestudio.co.kr/en/devlog/optical-mocap-pipeline"
}
</script>
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{
"@type": "Question",
"name": "How is an optical motion capture camera different from a regular camera?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Regular cameras capture full-color video, but motion capture cameras are specialized for the infrared (IR) spectrum. They illuminate markers with IR LEDs, detect only reflected light, and internally calculate the markers' 2D coordinates, transmitting only coordinate data to the PC."
}
},
{
"@type": "Question",
"name": "Is there a length limit for PoE cables?",
"acceptedAnswer": {
"@type": "Answer",
"text": "According to the Ethernet standard, PoE cables support a maximum of 100m. Most motion capture studios easily fall within this range."
}
},
{
"@type": "Question",
"name": "Is a higher camera frame rate always better?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Higher frame rates are advantageous for fast motion tracking and lower latency, but they increase data throughput and may reduce camera resolution. Generally, 120–240fps is sufficient for VTuber live and game motion capture, while 360fps or higher is used for ultra-high-speed motion analysis in sports science and similar fields."
}
},
{
"@type": "Question",
"name": "How often do marker swaps occur?",
"acceptedAnswer": {
"@type": "Answer",
"text": "If the markerset is well-designed and there are enough cameras, swaps during real-time capture are rare. However, the probability increases during fast movements or when markers are close together (such as hand clasping), and these sections are corrected in post-processing."
}
},
{
"@type": "Question",
"name": "If 2 cameras are enough for triangulation, why install 30?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Two cameras is merely the theoretical minimum. In practice, you must account for occlusion (marker obstruction), accuracy variations based on camera angle, and redundancy. With 30 cameras deployed, every marker is always seen by multiple cameras, enabling stable and accurate tracking."
}
},
{
"@type": "Question",
"name": "How often does calibration need to be done?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Typically, calibration is performed once at the start of each shooting day. However, during extended sessions, calibration can drift due to temperature changes or minor camera movement, so recalibration is recommended during 4–6 hour continuous shoots. Using OptiTrack Motive's Continuous Calibration feature allows real-time correction even during capture."
}
},
{
"@type": "Question",
"name": "Why can't I wear shiny clothing?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Because motion capture cameras detect infrared reflections, shiny materials (metal decorations, sequins, glossy synthetic fabrics, etc.) can reflect infrared light and create ghost markers. Wearing a dedicated mocap suit or comfortable clothing made of matte materials is best."
}
}
]
}
</script>
</head>
<body>
<noscript><iframe src="https://www.googletagmanager.com/ns.html?id=GTM-PPTNN6WD"
height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript>
<a href="#main-content" class="skip-to-content">Skip to content</a>

<div id="header-placeholder">
<nav class="navbar" aria-label="Navigation">
<div class="nav-container">
<div class="nav-logo">
<a href="/en">
<img src="/images/logo/mingle-logo.webp" alt="Mingle Studio">
<span data-i18n="header.studioName">Mingle Studio</span>
</a>
</div>
<ul id="nav-menu" class="nav-menu">
<li><a href="/en/about" class="nav-link" data-i18n="header.nav.about">About</a></li>
<li><a href="/en/services" class="nav-link" data-i18n="header.nav.services">Services</a></li>
<li><a href="/en/portfolio" class="nav-link" data-i18n="header.nav.portfolio">Portfolio</a></li>
<li><a href="/en/gallery" class="nav-link" data-i18n="header.nav.gallery">Gallery</a></li>
<li><a href="/en/schedule" class="nav-link" data-i18n="header.nav.schedule">Schedule</a></li>
<li><a href="/en/devlog" class="nav-link active" data-i18n="header.nav.devlog">DevLog</a></li>
<li><a href="/en/contact" class="nav-link" data-i18n="header.nav.contact">Contact</a></li>
<li><a href="/en/qna" class="nav-link" data-i18n="header.nav.qna">Q&A</a></li>
</ul>
<div class="nav-actions">
<div class="lang-switcher">
<button class="lang-btn" aria-label="Language">
<span class="lang-current">EN</span>
<svg class="lang-chevron" viewBox="0 0 10 6" width="10" height="6" aria-hidden="true">
<path d="M1 1l4 4 4-4" stroke="currentColor" stroke-width="1.5" fill="none" stroke-linecap="round" stroke-linejoin="round"></path>
</svg>
</button>
<ul class="lang-dropdown">
<li><button data-lang="ko">🇰🇷 한국어</button></li>
<li><button data-lang="en">🇺🇸 English</button></li>
<li><button data-lang="zh">🇨🇳 中文</button></li>
<li><button data-lang="ja">🇯🇵 日本語</button></li>
</ul>
</div>
<button class="theme-toggle" id="themeToggle" aria-label="Toggle dark mode">
<div class="theme-toggle-thumb">
<svg class="theme-toggle-icon theme-toggle-icon--sun" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" aria-hidden="true">
<circle cx="12" cy="12" r="5"></circle>
<line x1="12" y1="1" x2="12" y2="3"></line><line x1="12" y1="21" x2="12" y2="23"></line>
<line x1="4.22" y1="4.22" x2="5.64" y2="5.64"></line><line x1="18.36" y1="18.36" x2="19.78" y2="19.78"></line>
<line x1="1" y1="12" x2="3" y2="12"></line><line x1="21" y1="12" x2="23" y2="12"></line>
<line x1="4.22" y1="19.78" x2="5.64" y2="18.36"></line><line x1="18.36" y1="5.64" x2="19.78" y2="4.22"></line>
</svg>
<svg class="theme-toggle-icon theme-toggle-icon--moon" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" aria-hidden="true">
<path d="M21 12.79A9 9 0 1 1 11.21 3 7 7 0 0 0 21 12.79z"></path>
</svg>
</div>
</button>
<button class="hamburger" id="hamburger" aria-label="Menu" aria-expanded="false">
<span class="hamburger-line"></span>
<span class="hamburger-line"></span>
<span class="hamburger-line"></span>
</button>
</div>
</div>
</nav>
</div>
<main id="main-content">
<article class="blog-post">
<div class="blog-post-header">
<div class="container">
<a href="/en/devlog" class="blog-back-link">← Back to list</a>
<span class="blog-category">Motion Capture Technology</span>
<h1 class="blog-post-title">Complete Anatomy of the Optical Motion Capture Pipeline — From Cameras to Motion Data</h1>
<div class="blog-post-meta">
<time datetime="2026-04-05">Apr 5, 2026</time>
</div>
</div>
</div>
<div class="blog-post-body">
<div class="container">
<p>When an actor wearing a suit moves in a motion capture studio, the on-screen character follows in real time. It looks simple, but behind the scenes runs a precise technical pipeline: <strong>camera hardware → network transmission → 2D image processing → 3D reconstruction → skeleton solving → real-time streaming</strong>.</p>
<p>In this article, we dissect the entire pipeline of optical motion capture (based on OptiTrack) step by step.</p>
<hr>
<h2>Step 1: Camera Installation and Placement Strategy</h2>
<p>The first step in optical motion capture is deciding <strong>where and how to place the cameras</strong>.</p>
<figure class="blog-figure"><img src="/images/studio/모션캡쳐%20공간%20001.webp" alt="Mingle Studio motion capture space" loading="lazy"><figcaption>Mingle Studio motion capture space</figcaption></figure>
<h3>Placement Principles</h3>
<ul>
<li><strong>Height</strong>: Cameras are typically mounted at 2–3m height, angled about 30 degrees downward</li>
<li><strong>Layout</strong>: Arranged in a ring formation surrounding the capture volume (shooting space)</li>
<li><strong>Two-tier placement</strong>: Alternating cameras at high and low positions improves vertical coverage</li>
<li><strong>Overlap</strong>: Every point within the capture volume must be visible to <strong>at least 3 cameras</strong> simultaneously. Triangulation requires a minimum of 2, but 3 or more significantly improves accuracy and occlusion resilience</li>
</ul>
<h3>Relationship Between Camera Count and Accuracy</h3>
<p>More cameras means:</p>
<ul>
<li>Fewer blind spots → reduced probability of occlusion</li>
<li>More cameras seeing the same marker → improved triangulation accuracy</li>
<li>Other cameras compensate if some have issues (redundancy)</li>
</ul>
<p>At Mingle Studio, we use <strong>OptiTrack Prime 17 × 16 units + Prime 13 × 14 units</strong>, a total of 30 cameras arranged in an 8m × 7m space to minimize 360-degree blind spots.</p>
<h3>IR Pass Filter — Eyes That See Only Infrared</h3>
<p>An <strong>IR pass filter (infrared pass filter)</strong> is mounted in front of each motion capture camera lens. This filter blocks visible light and allows only infrared wavelengths (around 850nm) to pass through. This eliminates interference from fluorescent lights, sunlight, monitor glow, and other ambient lighting, allowing the camera to detect only <strong>marker light reflected from IR LEDs</strong>.</p>
<p>This filter is also the reason the studio lighting doesn't need to be completely turned off. However, direct sunlight or lighting with strong IR components can still cause interference, so studios use lighting with minimal IR emission.</p>
<h3>Frame Synchronization — How 30 Cameras Shoot Simultaneously</h3>
<p>For accurate triangulation, all cameras must trigger their shutters at <strong>exactly the same moment</strong>. If each camera captures at different timings, the position of fast-moving markers would vary between cameras, making 3D reconstruction inaccurate.</p>
<p>OptiTrack uses a <strong>hardware synchronization (Hardware Sync)</strong> method. One camera is designated as the <strong>Sync Master</strong>, generating timing signals, while the remaining cameras expose simultaneously in sync with this signal.</p>
<ul>
<li><strong>Ethernet cameras (Prime series)</strong>: The sync signal is embedded in the Ethernet connection itself or delivered through OptiTrack's eSync hub. No separate sync cable is needed.</li>
<li><strong>USB cameras (Flex series)</strong>: Cameras are connected via dedicated sync cables in a daisy chain.</li>
</ul>
<p>The precision of this synchronization is at the <strong>microsecond (μs) level</strong>, meaning all 30 cameras capture at virtually the exact same moment.</p>
<hr>
<h2>Step 2: PoE — Power and Data Through a Single Cable</h2>
<h3>What Is PoE (Power over Ethernet)?</h3>
<p>OptiTrack Prime series cameras connect via <strong>PoE (Power over Ethernet)</strong>. This technology delivers <strong>both power and data simultaneously</strong> through a single standard Ethernet cable (Cat5e/Cat6).</p>
<figure class="blog-figure"><img src="optical-mocap-pipeline/images/poe-switch.png" alt="PoE switch and camera connection" loading="lazy"><figcaption>PoE switch and camera connection</figcaption></figure>
<h3>Technical Standards</h3>
<table>
<thead>
<tr>
<th>Standard</th>
<th>Max Power</th>
<th>Notes</th>
</tr>
</thead>
<tbody><tr>
<td><strong>IEEE 802.3af (PoE)</strong></td>
<td>15.4W per port</td>
<td>Sufficient for standard motion capture cameras</td>
</tr>
<tr>
<td><strong>IEEE 802.3at (PoE+)</strong></td>
<td>25.5W per port</td>
<td>For high-frame-rate cameras or those with high IR LED output</td>
</tr>
</tbody></table>
<p>OptiTrack cameras typically consume around <strong>5–12W</strong>, well within the PoE standard range.</p>
<h3>Network Topology</h3>
<p>Cameras are connected in a <strong>star topology</strong>. Each camera connects 1:1 to an individual port on the PoE switch. Daisy chaining (serial connection) is not used.</p>
<div class="network-diagram">
<div class="network-cameras">
<div class="network-cam"><div class="network-cam-icon">CAM 1</div></div>
<div class="network-cam"><div class="network-cam-icon">CAM 2</div></div>
<div class="network-cam"><div class="network-cam-icon">CAM 3</div></div>
<div class="network-cam"><div class="network-cam-icon">···</div></div>
<div class="network-cam"><div class="network-cam-icon">CAM N</div></div>
</div>
<svg class="network-lines" viewBox="0 0 100 200" preserveAspectRatio="none">
<line x1="0" y1="20" x2="100" y2="45" />
<line x1="0" y1="55" x2="100" y2="45" />
<line x1="0" y1="90" x2="100" y2="45" />
<line x1="0" y1="125" x2="100" y2="45" />
<line x1="0" y1="160" x2="100" y2="45" />
</svg>
<div class="network-center">
<div class="network-switch">PoE Switch</div>
<div class="network-link"></div>
<div class="network-pc">Host PC</div>
</div>
</div>

<p>For 30 cameras, you would combine a 24-port + 8-port PoE+ switch or use a 48-port switch. When selecting a switch, you must verify the <strong>total PoE power budget</strong> (e.g., 30 cameras × 12W = 360W).</p>
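<p>The power-budget arithmetic above is worth automating when planning a camera network. A minimal sketch — the per-camera wattage and the 20% safety headroom are illustrative assumptions, not OptiTrack specifications:</p>

```python
# PoE power-budget check for a camera network (illustrative figures).
# Assumption: each camera draws a worst-case ~12 W (PoE+ allows 25.5 W/port).

def poe_budget_ok(camera_watts, switch_budget_watts, headroom=0.2):
    """Return True if total camera draw fits the switch's PoE budget
    with the given safety headroom (20% by default)."""
    total = sum(camera_watts)
    return total * (1 + headroom) <= switch_budget_watts

cameras = [12.0] * 30           # 30 cameras at 12 W each
print(sum(cameras))             # 360.0 W raw draw
print(poe_budget_ok(cameras, switch_budget_watts=500))   # True — ~432 W needed
print(poe_budget_ok(cameras, switch_budget_watts=370))   # False — no headroom left
```

<p>The same check extends naturally to per-port limits: no single camera may exceed 15.4W on an 802.3af port or 25.5W on an 802.3at port.</p>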
<h3>Advantages of PoE</h3>
<ul>
<li><strong>One cable does it all</strong> — no need for separate power adapters for each ceiling-mounted camera</li>
<li><strong>Clean installation</strong> — cable count is cut in half, simplifying installation and management</li>
<li><strong>Centralized power management</strong> — cameras can be collectively powered ON/OFF from the switch</li>
</ul>
<hr>
<h2>Step 3: What the Camera Sends — 2D Centroids</h2>
<p>Understanding what data is transmitted from cameras to the PC is the key to the pipeline.</p>
<figure class="blog-figure"><img src="optical-mocap-pipeline/images/motive-2d-centroid.png" alt="Motive camera 2D view — markers displayed as bright dots" loading="lazy"><figcaption>Motive camera 2D view — markers displayed as bright dots</figcaption></figure>
<h3>Camera Internal Processing</h3>
<p>Each OptiTrack camera has an <strong>infrared (IR) LED ring</strong> mounted around the camera lens. These LEDs emit infrared light, which is reflected back toward the camera by <strong>retroreflective markers</strong> attached to the actor's body. The camera sensor captures this reflected light as a grayscale IR image.</p>
<p>The important point here is that the camera <strong>does not send this raw image directly to the PC</strong>. The camera's internal processor handles it first:</p>
<p><strong>1. Thresholding</strong>
Only pixels above a certain brightness threshold are kept; the rest are discarded. Since only markers reflecting infrared light appear bright, this process separates markers from the background.</p>
<p><strong>2. Blob Detection</strong>
Clusters of bright pixels (blobs) are recognized as individual marker candidates.</p>
<p><strong>3. 2D Centroid Calculation</strong>
The <strong>precise center point (centroid)</strong> of each blob is calculated with sub-pixel precision (approximately 0.1 pixels). This uses a weighted average method where the brightness of each pixel within the blob serves as the weight.</p>
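<p>The core of the three onboard steps — thresholding, then a brightness-weighted centroid over the surviving pixels — fits in a few lines. The toy 5×5 image patch and the threshold value below are illustrative:</p>

```python
# Sub-pixel centroid of one IR blob via brightness-weighted averaging.
# The camera does this onboard; shown here on a toy 5x5 grayscale patch.

def weighted_centroid(image, threshold):
    """Return the (x, y) centroid of above-threshold pixels,
    weighted by pixel brightness."""
    total = wx = wy = 0.0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value >= threshold:      # 1. thresholding
                total += value          # 3. accumulate brightness weights
                wx += x * value
                wy += y * value
    return (wx / total, wy / total)

patch = [
    [0,   0,   0,   0, 0],
    [0,  60, 120,  60, 0],
    [0, 120, 255, 120, 0],
    [0,  60, 120,  60, 0],
    [0,   0,   0,   0, 0],
]
print(weighted_centroid(patch, threshold=50))   # (2.0, 2.0) — symmetric blob
```

<p>Because every bright pixel contributes in proportion to its brightness, the result lands between pixel centers — which is where the sub-pixel (~0.1px) precision comes from.</p>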
<h3>Data Transmitted to the PC</h3>
<p>In the default tracking mode, what the camera sends to the PC is <strong>2D centroid data</strong>:</p>
<ul>
<li><strong>(x, y) coordinates</strong> + size information for each marker candidate</li>
<li>Extremely small data — only a few hundred bytes per frame per camera</li>
</ul>
<p>Thanks to this small data volume, <strong>40+ cameras can operate on a single Gigabit Ethernet connection</strong>. Raw grayscale images can also be transmitted (for debugging/visualization), but this requires several MB/s per camera and is not used during normal tracking.</p>
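<p>A back-of-the-envelope estimate makes this concrete. The marker count and bytes-per-marker below are assumed figures for illustration, not OptiTrack's actual packet layout:</p>

```python
# Why 2D centroids are cheap: estimated aggregate bandwidth vs. GbE capacity.
# Assumed figures: 50 marker candidates/frame, ~12 bytes each, 240 fps, 30 cameras.

markers_per_frame = 50
bytes_per_marker = 12          # (x, y) + size, roughly
fps = 240
cameras = 30

per_camera = markers_per_frame * bytes_per_marker * fps      # bytes per second
total_mbit = per_camera * cameras * 8 / 1e6                  # megabits per second

print(f"{per_camera / 1000:.0f} kB/s per camera")            # 144 kB/s
print(f"{total_mbit:.1f} Mbit/s total vs 1000 Mbit/s GbE")   # ~35 Mbit/s
```

<p>Even with generous assumptions, the whole camera array uses a few percent of a Gigabit link — while a single raw 1MP grayscale stream at 240fps would exceed it on its own.</p>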
<blockquote>
<p>In other words, the camera is not "a device that captures and sends video" but rather closer to <strong>"a sensor that calculates marker positions and sends only coordinates."</strong></p>
</blockquote>
<p>You might wonder — why are motion capture cameras so expensive compared to regular cameras? The answer lies in the process described above. Regular cameras simply send the captured footage as-is, but motion capture cameras have <strong>a dedicated onboard processor</strong> that performs thresholding, blob detection, and sub-pixel centroid calculation in real time at 240–360 frames per second. Each camera essentially contains <strong>a small computer dedicated to image processing</strong>.</p>
<hr>
<h2>Step 4: Calibration — Aligning the Camera Eyes</h2>
<p>There is a mandatory process before 3D reconstruction can happen. The software must determine each camera's <strong>exact position, orientation, and lens characteristics</strong> — this is <strong>calibration</strong>.</p>
<figure class="blog-figure"><img src="optical-mocap-pipeline/images/calibration-tools.webp" alt="Calibration wand (left) and ground plane frame (right)" loading="lazy"><figcaption>Calibration wand (left) and ground plane frame (right)</figcaption></figure>
<h3>Wanding — Scanning the Space</h3>
<p>An operator walks through the entire capture volume while waving a <strong>calibration wand</strong> — a rod with LEDs or markers attached. Since the distances between the wand's markers are precisely known, when each camera captures the wand over thousands of frames, the software can calculate:</p>
<ul>
<li><strong>Intrinsic Parameters</strong> — characteristics unique to the camera lens, such as focal length and lens distortion coefficients</li>
<li><strong>Extrinsic Parameters</strong> — the camera's exact position and orientation in 3D space</li>
</ul>
<p>This calculation uses an optimization algorithm called <strong>Bundle Adjustment</strong>. It simultaneously optimizes all camera parameters based on thousands of 2D observation data points.</p>
<h3>Ground Plane Setup</h3>
<p>After wanding, an <strong>L-shaped calibration frame (Ground Plane)</strong> is placed on the floor. Three or more markers on this frame define the floor plane and coordinate origin:</p>
<ul>
<li>Where the origin (0, 0, 0) is located</li>
<li>Which directions are the X, Y, Z axes</li>
<li>The height reference of the floor plane</li>
</ul>
<p>Once calibration is complete, the software can convert any camera's 2D coordinates into an accurate 3D ray.</p>
<h3>Calibration Quality</h3>
<p>Motive software displays the <strong>reprojection error</strong> for each camera after calibration. The smaller this value (typically 0.5px or below), the more accurate the calibration. Cameras with large errors are repositioned or recalibrated.</p>
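<p>To make "reprojection error" concrete, here is a minimal sketch with an idealized pinhole camera (no lens distortion; focal length and coordinates are made-up numbers): project a reconstructed 3D point through the calibrated parameters and measure how far it lands from the observed 2D centroid.</p>

```python
import math

# Reprojection error for one marker in one camera, using an idealized
# pinhole model (no distortion terms). All numbers are illustrative.

def project(point3d, focal_px, cx, cy):
    """Project a camera-space 3D point (meters, +Z forward) to pixels."""
    x, y, z = point3d
    return (focal_px * x / z + cx, focal_px * y / z + cy)

def reprojection_error(point3d, observed_px, focal_px, cx, cy):
    """Pixel distance between the projection and the observed centroid."""
    u, v = project(point3d, focal_px, cx, cy)
    ou, ov = observed_px
    return math.hypot(u - ou, v - ov)

# Reconstructed marker 2 m in front of the camera, slightly off-axis:
err = reprojection_error((0.10, 0.05, 2.0), observed_px=(672.4, 568.1),
                         focal_px=1280, cx=608, cy=536)
print(f"{err:.2f} px")   # values below ~0.5 px indicate a good calibration
```

<p>Bundle adjustment is, in essence, the process of tweaking every camera's parameters at once until the sum of these errors over all observations is as small as possible.</p>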
<hr>
<h2>Step 5: 2D → 3D Reconstruction (Triangulation)</h2>
<p>Let's examine how the 2D centroids arriving at the PC are converted into 3D coordinates.</p>
<h3>Triangulation Principle</h3>
<ol>
<li>Utilizing each camera's <strong>exact 3D position, orientation, and lens characteristics</strong> obtained through calibration</li>
<li>Casting a <strong>ray</strong> from the camera's 2D centroid coordinate — a straight line extending from the camera position through the centroid direction into 3D space</li>
<li>The <strong>point where rays from 2 or more cameras viewing the same marker intersect</strong> is the marker's 3D coordinate</li>
</ol>
<p><video src="optical-mocap-pipeline/images/continuous-calibration-web.mp4" autoplay loop muted playsinline style="width:100%;border-radius:12px;margin:1.5rem 0;"></video></p>
<h3>In Reality, Rays Don't Intersect Perfectly</h3>
<p>Due to noise, lens distortion, calibration errors, and other factors, rays almost never meet at a single exact point. That's why <strong>Least Squares Optimization</strong> is used:</p>
<ul>
<li>Calculates the 3D coordinate where the sum of distances to all rays is minimized</li>
<li>The distance between each ray and the reconstructed 3D point is called the <strong>residual</strong></li>
<li>Smaller residuals mean better reconstruction quality — in a well-calibrated OptiTrack system, <strong>sub-millimeter residuals (below 0.5mm)</strong> can be expected</li>
</ul>
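<p>The least-squares intersection can be written out directly. For rays with origin o_i and unit direction d_i, minimizing the summed squared distance to a point p reduces to the 3×3 linear system Σ(I − d_i d_iᵀ)p = Σ(I − d_i d_iᵀ)o_i. A dependency-free sketch with two toy rays:</p>

```python
import math

# Least-squares triangulation: the 3D point minimizing summed squared
# distance to a set of rays (one ray per camera seeing the marker).
# Toy rays below; a real system derives rays from calibration + 2D centroids.

def triangulate(rays):
    """rays: list of (origin, unit_direction) pairs of 3-tuples.
    Solves sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for o, d in rays:
        for r in range(3):
            for c in range(3):
                m = (1.0 if r == c else 0.0) - d[r] * d[c]
                A[r][c] += m
                b[r] += m * o[c]
    return solve3(A, b)

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

# Two cameras 4 m apart, both seeing a marker at (0, 0, 2):
s = 1 / math.sqrt(2)
rays = [((-2.0, 0.0, 0.0), (s, 0.0, s)),    # ray from camera 1
        ((2.0, 0.0, 0.0), (-s, 0.0, s))]    # ray from camera 2
print(triangulate(rays))   # ≈ [0.0, 0.0, 2.0]
```

<p>With more than two rays the same system simply accumulates more terms, which is why every extra camera seeing a marker tightens the solution.</p>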
<h3>Impact of Camera Count</h3>
<table>
<thead>
<tr>
<th>Number of cameras seeing the marker</th>
<th>Effect</th>
</tr>
</thead>
<tbody><tr>
<td><strong>2</strong></td>
<td>3D reconstruction possible (minimum requirement)</td>
</tr>
<tr>
<td><strong>3</strong></td>
<td>Improved accuracy + tracking maintained even if 1 camera is occluded</td>
</tr>
<tr>
<td><strong>4 or more</strong></td>
<td>High accuracy + strong occlusion resilience</td>
</tr>
</tbody></table>
<hr>
<h2>Step 6: Marker Identification and Labeling</h2>
<h3>Marker Suit and Marker Placement</h3>
<p>To turn 3D reconstruction into meaningful motion data, markers must be attached at <strong>precise locations</strong> on the body.</p>
<p><strong>Marker Specifications</strong></p>
<ul>
<li>Diameter: Typically <strong>12–19mm</strong> spherical retroreflective markers</li>
<li>Material: Foam/plastic spheres coated with 3M retroreflective tape</li>
<li>Attachment: Velcro, double-sided tape, or pre-mounted on dedicated marker suits</li>
</ul>
<p><strong>Markerset Standards</strong>
The number and placement of markers follow standardized <strong>markerset</strong> specifications:</p>
<ul>
<li><strong>Baseline (37 markers)</strong> — OptiTrack's default full-body markerset. Covers upper body, lower body, and head; the most commonly used for game/video motion capture</li>
<li><strong>Baseline + Fingers (~57 markers)</strong> — Extended version adding approximately 20 finger markers</li>
<li><strong>Helen Hayes (~15–19 markers)</strong> — Medical/gait analysis standard. A minimal markerset focused on the lower body</li>
</ul>
<p>Markers are placed at <strong>anatomical landmarks where bones protrude</strong> (acromion, lateral epicondyle, anterior superior iliac spine, etc.). These locations most accurately reflect bone movement through the skin and minimize skin artifact.</p>
<p>After 3D reconstruction, each frame produces a <strong>cloud of unnamed 3D points (Point Cloud)</strong>. The process of determining "is this point the left knee marker or the right shoulder marker?" is <strong>labeling</strong>.</p>
<figure class="blog-figure"><img src="optical-mocap-pipeline/images/marker-labeling.png" alt="Markers labeled in Motive" loading="lazy"><figcaption>Markers labeled in Motive</figcaption></figure>
<h3>Labeling Algorithms</h3>
<p><strong>Template Matching</strong>
Based on the geometric arrangement of the markerset defined during calibration (e.g., the distance between knee and ankle markers), the current frame's 3D points are compared against the template.</p>
<p><strong>Predictive Tracking</strong>
Based on velocity and acceleration from previous frames, the software predicts where each marker will be in the next frame and matches it to the nearest 3D point.</p>
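<p>A bare-bones sketch of that predictive step — constant-velocity prediction plus greedy nearest-neighbor matching. Real trackers add acceleration models, distance gating, and global assignment, so treat this as the core idea only (marker names and coordinates are invented):</p>

```python
# Predictive labeling sketch: predict each labeled marker's next position
# from its velocity, then claim the nearest unlabeled 3D point.
# Constant-velocity model; real trackers are considerably more sophisticated.

def predict_and_match(prev, curr, points):
    """prev/curr: {label: (x, y, z)} marker positions for the last two frames.
    points: unlabeled 3D points observed in the new frame.
    Returns {label: point} assignments."""
    remaining = list(points)
    labels = {}
    for label, p1 in curr.items():
        p0 = prev[label]
        pred = tuple(2 * b - a for a, b in zip(p0, p1))   # p1 + (p1 - p0)
        best = min(remaining, key=lambda q: sum((qi - pi) ** 2
                                                for qi, pi in zip(q, pred)))
        labels[label] = best
        remaining.remove(best)          # each point can be claimed only once
    return labels

prev = {"LKNE": (0.0, 0.5, 0.0), "RKNE": (0.3, 0.5, 0.0)}
curr = {"LKNE": (0.0, 0.5, 0.1), "RKNE": (0.3, 0.5, 0.1)}
new_points = [(0.31, 0.5, 0.21), (0.01, 0.5, 0.19)]
print(predict_and_match(prev, curr, new_points))
# LKNE → (0.01, 0.5, 0.19), RKNE → (0.31, 0.5, 0.21)
```

<p>This greedy nearest-neighbor matching is also exactly where swaps sneak in: when two predictions land close to the same pair of points, the wrong one can win.</p>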
<h3>Marker Swap Problem</h3>
<p>When two markers pass very close to each other, the software may <strong>swap their labels</strong> — a phenomenon where labels are exchanged. This is one of the most common artifacts in optical mocap.</p>
<p>Solutions:</p>
<ul>
<li>Manually correct labels in post-processing</li>
<li>Design marker placement to be <strong>asymmetric</strong> for easier differentiation</li>
<li>Use <strong>active markers</strong> — each marker emits a unique infrared pattern, enabling hardware-level identification and completely eliminating swaps</li>
</ul>
<h3>Passive vs Active Markers</h3>
<table>
<thead>
<tr>
<th>Category</th>
<th>Passive Markers (Reflective)</th>
<th>Active Markers (Self-emitting)</th>
</tr>
</thead>
<tbody><tr>
<td><strong>Principle</strong></td>
<td>Reflects light from camera IR LEDs</td>
<td>Each marker emits a unique IR pattern</td>
</tr>
<tr>
<td><strong>Identification</strong></td>
<td>Software-based (swap possible)</td>
<td>Hardware-based (no swaps)</td>
</tr>
<tr>
<td><strong>Advantages</strong></td>
<td>Lightweight, inexpensive, easy to attach</td>
<td>Auto-identification, no labeling errors</td>
</tr>
<tr>
<td><strong>Disadvantages</strong></td>
<td>May require post-processing labeling</td>
<td>Heavier, requires battery/power</td>
</tr>
</tbody></table>
<p>In most entertainment/VTuber production environments, <strong>passive markers</strong> are primarily used. They are lightweight and comfortable, and software performance is good enough that automatic labeling works well in most situations.</p>
<hr>
<h2>Step 7: Skeleton Solving — From Points to a Skeletal Structure</h2>
<p>This step maps labeled 3D markers to a human <strong>skeleton</strong> structure.</p>
<h3>Pre-Calibration</h3>
<p>Before shooting, the actor strikes a <strong>T-pose</strong> (arms outstretched), and the software calculates bone lengths (arm length, leg length, etc.) and joint positions based on marker locations.</p>
<p>This is followed by a <strong>ROM (Range of Motion) capture</strong>.</p>
<figure class="blog-figure"><img src="optical-mocap-pipeline/images/rom-grid.webp" alt="ROM capture — calibrating joint ranges through various movements" loading="lazy"><figcaption>ROM capture — calibrating joint ranges through various movements</figcaption></figure>
<p>Through various movements such as arm circles, knee bends, and torso twists, the software precisely calibrates <strong>joint center points and rotation axes</strong>.</p>
<h3>Real-Time Solving</h3>
|
||
<p>During capture, for every frame:</p>
|
||
<ol>
|
||
<li>Receives labeled 3D marker coordinates</li>
|
||
<li>Calculates the <strong>3D position and rotation</strong> of each joint based on marker positions</li>
|
||
<li>Algorithms such as <strong>Inverse Kinematics</strong> compute a natural skeletal pose</li>
|
||
<li>Result: <strong>Translation + Rotation</strong> data for all joints across the timeline</li>
|
||
</ol>
|
||
<h3>Rigid Body Tracking (Prop Tracking)</h3>
|
||
<p>By attaching <strong>3 or more markers in an asymmetric pattern</strong> to props like swords, guns, or cameras, the software recognizes the marker cluster as a single rigid body, enabling <strong>6DOF (3 axes of position + 3 axes of rotation)</strong> tracking.</p>
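<p>To make the rigid-body idea concrete, here is a minimal sketch of recovering a prop's 6DOF pose from three or more markers with the classic Kabsch (SVD) algorithm. This is an illustration, not OptiTrack's actual solver; the function name <code>rigid_body_pose</code> and the marker layout are assumptions for the example.</p>

```python
import numpy as np

def rigid_body_pose(local_pts, world_pts):
    """Recover rotation R and translation t so that world ~= R @ local + t.

    Kabsch algorithm: center both point sets, take the SVD of their
    covariance matrix, and build the least-squares optimal rotation.
    """
    lc = local_pts.mean(axis=0)                  # centroid of local layout
    wc = world_pts.mean(axis=0)                  # centroid of observed markers
    H = (local_pts - lc).T @ (world_pts - wc)    # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = wc - R @ lc
    return R, t

# Asymmetric 3-marker layout in the prop's local frame (meters)
local = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.25, 0.0]])
# Simulate the prop rotated 90 degrees about Z and translated
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
world = local @ Rz.T + np.array([1.0, 2.0, 0.5])
R, t = rigid_body_pose(local, world)
```

<p>The asymmetry of the layout is what makes the recovered rotation unambiguous: with a symmetric marker arrangement, several rotations would fit the observations equally well.</p>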
<hr>
<h2>Step 8: Real-Time Streaming and Data Output</h2>
<h3>Real-Time Streaming</h3>
<p><figure class="blog-figure"><img src="optical-mocap-pipeline/images/realtime-streaming.png" alt="Real-time streaming — sending motion data from Motive to a game engine" loading="lazy"><figcaption>Real-time streaming — sending motion data from Motive to a game engine</figcaption></figure></p>
<p>OptiTrack Motive delivers solved data to external software in real time:</p>
<ul>
<li><strong>NatNet SDK</strong> — OptiTrack's proprietary protocol, UDP-based low-latency transmission</li>
<li><strong>VRPN</strong> — A standard protocol in the VR/mocap field</li>
</ul>
<p>This enables real-time character animation in <strong>Unity, Unreal Engine, MotionBuilder</strong>, and more. VTuber live broadcasts are possible thanks to this real-time streaming.</p>
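<p>Conceptually, the real-time link is a stream of small per-frame pose packets over UDP. The sketch below round-trips one such packet on the loopback interface. The packet layout here is invented purely for illustration and is not the real NatNet wire format; for actual projects, use the NatNet SDK or an existing client library.</p>

```python
import socket
import struct

# Hypothetical packet: rigid-body id (uint32) + position xyz + quaternion
# xyzw, all little-endian float32. NOT the actual NatNet format.
POSE_FMT = "<I3f4f"
POSE_SIZE = struct.calcsize(POSE_FMT)   # 4 + 12 + 16 = 32 bytes

def pack_pose(body_id, pos, quat):
    return struct.pack(POSE_FMT, body_id, *pos, *quat)

def unpack_pose(data):
    vals = struct.unpack(POSE_FMT, data[:POSE_SIZE])
    return {"id": vals[0], "pos": vals[1:4], "quat": vals[4:8]}

# Loopback round-trip: one "frame" from a mock server to a mock client
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))           # let the OS pick a free port
client.settimeout(5.0)
addr = client.getsockname()
server.sendto(pack_pose(7, (1.0, 1.7, 0.0), (0.0, 0.0, 0.0, 1.0)), addr)
frame = unpack_pose(client.recv(1024))
server.close(); client.close()
```

<p>UDP is the natural choice for this kind of link: a late pose packet is worthless (the next frame supersedes it), so the retransmission guarantees of TCP would only add latency.</p>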
<h3>Recorded Data Output Formats</h3>
<table>
<thead>
<tr>
<th>Format</th>
<th>Use Case</th>
</tr>
</thead>
<tbody><tr>
<td><strong>FBX</strong></td>
<td>Skeleton + animation data, compatible with game engines/DCC tools</td>
</tr>
<tr>
<td><strong>BVH</strong></td>
<td>Hierarchical motion data, primarily used for retargeting</td>
</tr>
<tr>
<td><strong>C3D</strong></td>
<td>Raw 3D marker data, biomechanics/research standard</td>
</tr>
</tbody></table>
<hr>
<h2>Step 9: Post-Processing — Refining the Data</h2>
<p><figure class="blog-figure"><img src="optical-mocap-pipeline/images/post-processing.png" alt="Post-processing — cleaning up motion data in Motive" loading="lazy"><figcaption>Post-processing — cleaning up motion data in Motive</figcaption></figure></p>
<p>Data from real-time capture can sometimes be used as-is, but most professional work involves a <strong>post-processing</strong> stage.</p>
<h3>Gap Filling</h3>
<p>This uses <strong>interpolation</strong> to fill gaps where markers temporarily disappeared due to occlusion.</p>
<ul>
<li><strong>Linear interpolation</strong> — Simply connects the frames before and after with a straight line. Suitable for short gaps</li>
<li><strong>Spline interpolation</strong> — Fills with smooth curves. Better for maintaining natural movement</li>
<li><strong>Pattern-based interpolation</strong> — References data from other takes of the same repeated movement</li>
</ul>
<p>The longer the gap, the less accurate the interpolation, which is why minimizing occlusion during shooting is most important.</p>
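<p>The first two techniques can be sketched in a few lines, assuming NumPy and SciPy are available (real mocap tools implement these internally). Here a simulated 20-frame occlusion gap in one axis of a marker trajectory is filled both ways:</p>

```python
import numpy as np
from scipy.interpolate import CubicSpline

# One axis of a marker trajectory sampled at 240 fps; frames 40-59 occluded
t = np.arange(120)
x = np.sin(2 * np.pi * t / 120)          # ground-truth motion (for comparison)
gap = (t >= 40) & (t < 60)
observed_t, observed_x = t[~gap], x[~gap]

# Linear: a straight line between the frames bracketing the gap
linear_fill = np.interp(t[gap], observed_t, observed_x)

# Spline: a smooth curve through the surrounding frames
spline_fill = CubicSpline(observed_t, observed_x)(t[gap])

linear_err = np.max(np.abs(linear_fill - x[gap]))
spline_err = np.max(np.abs(spline_fill - x[gap]))
```

<p>On smooth, curved motion like this, the spline fill tracks the true trajectory far more closely than the straight line, which is exactly why it is preferred for anything but very short gaps.</p>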
<h3>Smoothing and Filtering</h3>
<p>Captured data may contain subtle jitter (high-frequency noise). To remove this:</p>
<ul>
<li><strong>Butterworth filter</strong> — A low-pass filter that removes noise above a specified frequency</li>
<li><strong>Gaussian smoothing</strong> — Reduces jitter using a weighted average of surrounding frames</li>
</ul>
<p>However, excessive smoothing can cause loss of <strong>detail and impact</strong> in the motion, so the strength must be set appropriately to avoid blurring sharp movements like sword swings.</p>
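<p>A minimal Butterworth example, assuming SciPy (the cutoff of 10 Hz and noise level are illustrative, not recommendations): slow body motion plus synthetic marker jitter, filtered with zero phase delay.</p>

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 240.0                                    # capture rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
motion = np.sin(2 * np.pi * 2 * t)            # slow 2 Hz body motion
rng = np.random.default_rng(0)
jitter = 0.02 * rng.standard_normal(t.size)   # high-frequency marker noise
noisy = motion + jitter

# 4th-order low-pass Butterworth at 10 Hz; filtfilt runs the filter
# forward and backward so no phase delay is added to the motion
b, a = butter(4, 10.0, btype="low", fs=fs)
smoothed = filtfilt(b, a, noisy)

raw_err = np.sqrt(np.mean((noisy - motion) ** 2))
smooth_err = np.sqrt(np.mean((smoothed - motion) ** 2))
```

<p>The trade-off in the paragraph above lives in that cutoff frequency: lower it and more jitter disappears, but a fast sword swing starts to get rounded off with the noise.</p>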
<h3>Marker Swap Correction</h3>
<p>This involves finding sections where marker swaps (described in Step 6) occurred and manually correcting the labels. In Motive, you can visually inspect and correct marker trajectories on the timeline.</p>
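<p>A common way to find candidate swap frames before inspecting them by eye is to flag physically implausible frame-to-frame jumps in a labeled marker's trajectory. The sketch below is a hypothetical helper (the <code>flag_swap_frames</code> name and the 20 m/s threshold are assumptions), not a Motive feature:</p>

```python
import numpy as np

def flag_swap_frames(traj, fps=240.0, max_speed=20.0):
    """Flag frames where a marker moves implausibly fast (m/s).

    traj: (num_frames, 3) positions in meters. A labeled marker that
    suddenly teleports is the typical symptom of a swap.
    """
    step = np.linalg.norm(np.diff(traj, axis=0), axis=1)  # per-frame motion
    speed = step * fps
    return np.where(speed > max_speed)[0] + 1             # offending frames

# Two hand markers 0.4 m apart whose labels get swapped at frame 5
left = np.tile([0.0, 1.0, 0.0], (10, 1))
right = np.tile([0.4, 1.0, 0.0], (10, 1))
left[5:], right[5:] = right[5:].copy(), left[5:].copy()   # the swap

flags = flag_swap_frames(left)
```

<p>A 0.4 m jump in a single 240 fps frame implies 96 m/s, far beyond any human motion, so the swap frame stands out immediately.</p>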
<h3>Retargeting</h3>
<p>The process of applying captured skeleton data to <strong>a character with different proportions</strong>. For example, to apply motion data from a 170cm actor to a 3m giant character or a 150cm child character, joint rotations must be preserved while bone lengths are recalculated to match the target character. MotionBuilder, Maya, Unreal Engine, and others provide retargeting functionality.</p>
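<p>The core idea (keep the joint rotations, swap the bone lengths) can be shown with toy 2D forward kinematics. This is a deliberate simplification of what MotionBuilder or Unreal actually do; the chain and lengths are invented for the example.</p>

```python
import numpy as np

def fk_positions(bone_lengths, joint_angles):
    """2D forward kinematics for a simple chain (e.g. shoulder-elbow-wrist).

    Joint rotations are preserved as-is; only bone lengths change,
    which is the essence of retargeting to different proportions.
    """
    pos = np.zeros(2)
    total = 0.0
    out = [pos.copy()]
    for length, angle in zip(bone_lengths, joint_angles):
        total += angle                                   # accumulate rotation
        pos = pos + length * np.array([np.cos(total), np.sin(total)])
        out.append(pos.copy())
    return np.array(out)

angles = [np.pi / 4, -np.pi / 6]                 # captured joint rotations
actor = fk_positions([0.30, 0.25], angles)       # actor's arm segments (m)
giant = fk_positions([0.60, 0.50], angles)       # same pose, 2x proportions
```

<p>Because the pose is stored as rotations rather than positions, doubling every bone length simply doubles the reach while the silhouette of the movement stays identical.</p>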
<hr>
<h2>Step 10: Common On-Set Issues and Solutions</h2>
<p>Even seemingly perfect optical mocap encounters real-world challenges on set.</p>
<h3>Stray Reflections</h3>
<p>Infrared light reflecting off objects other than markers creates <strong>ghost markers</strong> — false marker detections.</p>
<ul>
<li>Causes: Metal surfaces, shiny clothing, glasses, watches, floor reflections, etc.</li>
<li>Solution: Cover reflective surfaces with matte tape, or use <strong>masking</strong> in Motive to tell the software to ignore those areas</li>
</ul>
<h3>Marker Detachment</h3>
<p>Markers may fall off the suit or shift position during intense movements.</p>
<ul>
<li>Solution: Carefully check marker attachment before shooting; for vigorous motion capture, combine Velcro + double-sided tape for stronger adhesion</li>
<li>It's also important to periodically monitor marker condition during sessions</li>
</ul>
<h3>Clothing Restrictions</h3>
<p>Actors should ideally wear <strong>matte, close-fitting clothing</strong> during capture. Color matters little (even black doesn't interfere with marker reflection), but shiny materials or loose clothing can cause unstable marker positions or stray reflections. Wearing a dedicated mocap suit is the most reliable option.</p>
<h3>Calibration Maintenance</h3>
<p>Calibration can gradually drift due to temperature changes within the capture volume, camera vibrations, or minor tripod shifts. For extended shooting sessions, it's recommended to <strong>recalibrate</strong> midway, or use Motive's <strong>Continuous Calibration</strong> feature for real-time correction during capture.</p>
<hr>
<h2>Latency — How Long From Movement to Screen?</h2>
<p>Here is the time breakdown for each stage of the pipeline.</p>
<table>
<thead>
<tr>
<th>Stage</th>
<th>Duration</th>
</tr>
</thead>
<tbody><tr>
<td>Camera exposure (at 240fps)</td>
<td>~4.2ms</td>
</tr>
<tr>
<td>Camera internal processing (centroid calculation)</td>
<td>~0.5–1ms</td>
</tr>
<tr>
<td>Network transmission (PoE → PC)</td>
<td>&lt;1ms</td>
</tr>
<tr>
<td>3D reconstruction + labeling</td>
<td>~1–2ms</td>
</tr>
<tr>
<td>Skeleton solving</td>
<td>~0.5–1ms</td>
</tr>
<tr>
<td>Streaming output (NatNet)</td>
<td>&lt;1ms</td>
</tr>
<tr>
<td><strong>Total end-to-end latency</strong></td>
<td><strong>Approx. 8–14ms (at 240fps)</strong></td>
</tr>
</tbody></table>
<p>At 360fps, the exposure time decreases, making latencies <strong>below 7ms</strong> achievable. This level of latency is imperceptible to humans, enabling natural real-time response even in VTuber live broadcasts.</p>
<blockquote>
<p>Note: Most of the latency comes from the <strong>camera exposure time (frame period)</strong>. This is why higher frame rates result in lower latency.</p>
</blockquote>
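<p>A quick sanity check on that budget, taking the midpoint of each range from the table (the midpoints are an assumption for illustration; real per-stage timings vary by system):</p>

```python
# Rough end-to-end latency budget at 240 fps, ranges taken at midpoints
fps = 240
stages_ms = {
    "camera exposure (frame period)": 1000.0 / fps,  # ~4.17 ms
    "onboard centroid calculation": 0.75,
    "network transmission": 0.5,
    "3d reconstruction + labeling": 1.5,
    "skeleton solving": 0.75,
    "streaming output": 0.5,
}
total_ms = sum(stages_ms.values())
exposure_share = stages_ms["camera exposure (frame period)"] / total_ms
```

<p>The sum lands near the low end of the quoted 8–14ms range, and the frame period alone accounts for roughly half of it, which is why moving from 240fps to 360fps buys an immediate latency win.</p>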
<hr>
<h2>Full Pipeline Summary</h2>
<div class="pipeline-flow">
<div class="pipeline-step">
<div class="pipeline-step-title">1. Camera Installation · IR Filter · Frame Sync</div>
<p class="pipeline-step-desc">30 cameras arranged in a ring, IR pass filters detect infrared only, hardware sync at μs precision</p>
</div>
<div class="pipeline-arrow">↓</div>
<div class="pipeline-step">
<div class="pipeline-step-title">2. PoE Network</div>
<p class="pipeline-step-desc">Single Cat6 cable carries power + data, star topology connection to switch</p>
</div>
<div class="pipeline-arrow">↓</div>
<div class="pipeline-step">
<div class="pipeline-step-title">3. Camera Onboard Processing → 2D Centroids</div>
<p class="pipeline-step-desc">IR LED emission → marker reflection received → thresholding → blob detection → sub-pixel centroid calculation → coordinates transmitted</p>
</div>
<div class="pipeline-arrow">↓</div>
<div class="pipeline-step">
<div class="pipeline-step-title">4. Calibration</div>
<p class="pipeline-step-desc">Wanding to determine camera intrinsic/extrinsic parameters, ground plane to define coordinate system</p>
</div>
<div class="pipeline-arrow">↓</div>
<div class="pipeline-step">
<div class="pipeline-step-title">5. 2D → 3D Triangulation</div>
<p class="pipeline-step-desc">Ray intersection from multiple cameras' 2D coordinates + least squares optimization to reconstruct 3D coordinates</p>
</div>
<div class="pipeline-arrow">↓</div>
<div class="pipeline-step">
<div class="pipeline-step-title">6. Marker Labeling</div>
<p class="pipeline-step-desc">Template matching + predictive tracking to assign marker names to each 3D point</p>
</div>
<div class="pipeline-arrow">↓</div>
<div class="pipeline-step">
<div class="pipeline-step-title">7. Skeleton Solving</div>
<p class="pipeline-step-desc">Based on T-pose + ROM calibration, inverse kinematics to calculate joint positions and rotations</p>
</div>
<div class="pipeline-arrow">↓</div>
<div class="pipeline-step">
<div class="pipeline-step-title">8. Real-Time Streaming · Data Output</div>
<p class="pipeline-step-desc">Real-time transmission to Unity/Unreal/MotionBuilder via NatNet/VRPN, recording in FBX/BVH/C3D</p>
</div>
<div class="pipeline-arrow">↓</div>
<div class="pipeline-step">
<div class="pipeline-step-title">9. Post-Processing</div>
<p class="pipeline-step-desc">Gap filling · smoothing · marker swap correction · retargeting</p>
</div>
<div class="pipeline-arrow">↓</div>
<div class="pipeline-step">
<div class="pipeline-step-title">Final Output</div>
<p class="pipeline-step-desc">Applied to game cinematics · VTuber live · video content (total latency approx. 8–14ms)</p>
</div>
</div>
<p>The camera does not send raw footage to the PC — instead, the camera calculates marker coordinates internally and sends only those, while the PC reconstructs them in 3D and maps them to a skeleton. This is the core principle of optical motion capture.</p>
<hr>
<h2>Frequently Asked Questions (FAQ)</h2>
<p><strong>Q. How is an optical motion capture camera different from a regular camera?</strong></p>
<p>Regular cameras capture full-color video, but motion capture cameras are specialized for the infrared (IR) spectrum. They illuminate markers with IR LEDs, detect only reflected light, and internally calculate the markers' 2D coordinates, transmitting only coordinate data to the PC.</p>
<p><strong>Q. Is there a length limit for PoE cables?</strong></p>
<p>According to the Ethernet standard, PoE cables support a <strong>maximum of 100m</strong>. Most motion capture studios easily fall within this range.</p>
<p><strong>Q. Is a higher camera frame rate always better?</strong></p>
<p>Higher frame rates are advantageous for fast motion tracking and lower latency, but they increase data throughput and may reduce camera resolution. Generally, 120–240fps is sufficient for VTuber live and game motion capture, while 360fps or higher is used for ultra-high-speed motion analysis in sports science and similar fields.</p>
<p><strong>Q. How often do marker swaps occur?</strong></p>
<p>If the markerset is well-designed and there are enough cameras, swaps during real-time capture are rare. However, the probability increases during fast movements or when markers are close together (such as hand clasping), and these sections are corrected in post-processing.</p>
<p><strong>Q. If 2 cameras are enough for triangulation, why install 30?</strong></p>
<p>Two cameras are merely the theoretical minimum. In practice, you must account for occlusion (marker obstruction), accuracy variations based on camera angle, and redundancy. With 30 cameras deployed, every marker is always seen by multiple cameras, enabling stable and accurate tracking.</p>
<p><strong>Q. How often does calibration need to be done?</strong></p>
<p>Typically, calibration is performed once at the start of each shooting day. However, during extended sessions, calibration can drift due to temperature changes or minor camera movement, so recalibration is recommended during 4–6 hour continuous shoots. Using OptiTrack Motive's Continuous Calibration feature allows real-time correction even during capture.</p>
<p><strong>Q. Can I wear shiny clothing during capture?</strong></p>
<p>It's best avoided. Because motion capture cameras detect infrared reflections, shiny materials (metal decorations, sequins, glossy synthetic fabrics, etc.) can reflect infrared light and create ghost markers. A dedicated mocap suit, or comfortable clothing made of matte materials, is the safest choice.</p>
<hr>
<p>If you have further questions about the technical structure of optical motion capture, feel free to ask on our <a href="/contact">contact page</a>. If you'd like to experience it firsthand at Mingle Studio, check out our <a href="/services">services page</a>.</p>
</div>
</div>
<div class="blog-post-footer">
<div class="container">
<a href="/en/devlog" class="blog-back-btn"><i class="fas fa-arrow-left"></i> Back to list</a>
</div>
</div>
</article>
</main>
<div id="footer-placeholder"></div>
<script src="/js/i18n.js"></script>
<script src="/js/common.js"></script>
</body>
</html> |