ROBOTICS — TECHNICAL REFERENCE
// Technical Reading Material

COMPREHENSIVE
ROBOTICS

A complete reference covering deep learning, LLMs, SLAM, inverse kinematics, electronics, Python programming, mechanical engineering, offensive security, and robot manufacturing. Designed as dense, technical quick-review material.

Deep Learning LLM / Embodied AI SLAM Inverse Kinematics Electronics Python / ROS2 Offensive Security Manufacturing
00
Introduction

ROBOTICS ARCHITECTURE

A modern robot is a cyber-physical system that combines perception, processing, and actuation. Four main domains interact:

🧠 Cognition — Deep learning, LLMs, SLAM, motion planning, decision making.
⚙️ Physical — Actuators, sensors, power electronics, mechanical design, materials.
💻 Software — ROS2, middleware, drivers, real-time control algorithms.
🔒 Security — ROS attack surface, firmware, CAN bus, robot networks.

Robot Taxonomy

Category | Example | DoF | Environment
Serial Manipulator | UR5, KUKA KR6, Franka Emika | 6–7 | Structured / industrial
Wheeled Mobile Robot | TurtleBot, ROS Car, AGV | 2–3 | Flat floors
Quadruped | Boston Dynamics Spot, ANYmal | 12+ | Uneven terrain
Humanoid | Atlas, Unitree H1, Figure 01 | 28–44 | Human environments
UAV | DJI, PX4-based, Crazyflie | 4–6 | Aerial
Surgical | Da Vinci, MAKO | 7+ | Sterile medical

Robot Control Loop

# Basic Sense → Plan → Act (SPA) loop
while True:
    # 1. SENSE: read sensors
    state = get_sensor_data()        # IMU, camera, encoders

    # 2. ESTIMATE: estimate the world state
    world_model = slam_update(state)  # position, map

    # 3. PLAN: plan the action
    goal = high_level_planner(world_model)
    path = motion_planner(goal, world_model)

    # 4. ACT: send to actuators
    joint_angles = inverse_kinematics(path.next_pose())
    actuate(joint_angles)

    # 5. EVALUATE: learning / reward
    reward = compute_reward(state, goal)
    update_policy(reward)
01
AI & Intelligence

DEEP LEARNING
FOR ROBOTICS

Deep learning lets robots learn complex patterns from raw sensor data — from camera pixels to LiDAR signals — without manual feature engineering.

Computer Vision for Robotics

CNN Architectures for Robot Perception

Architecture | Use | Strengths | Tradeoff
ResNet-50/101 | Feature extraction, backbone | ImageNet-pretrained, stable | Heavy, 25M params
MobileNetV3 | Edge deployment (Jetson Nano) | Lightweight, fast inference | Lower accuracy
EfficientDet | Object detection | Scalable, efficient | Complex training
YOLOv8/v11 | Real-time detection | Very fast, ≤1 ms | Less precise on small objects
Vision Transformer | Segmentation, depth | Global context | Needs lots of data
SAM (Segment Anything) | Universal segmentation | Zero-shot segmentation | Heavy, 632M params

Implementing Object Detection with YOLOv8

from ultralytics import YOLO
import cv2
import numpy as np

# Load model — use yolov8n (nano) for edge devices
model = YOLO('yolov8n.pt')

def detect_objects_for_robot(frame, conf_thresh=0.5):
    """
    Object detection for robot grasp planning.
    Returns: list of (bbox, class, confidence, depth_estimate)
    """
    results = model(frame, conf=conf_thresh, verbose=False)
    
    detections = []
    for r in results:
        for box in r.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            cls_name = r.names[int(box.cls)]
            conf = float(box.conf)
            
            # Estimate 3D position from the bounding box (no depth camera)
            cx = (x1 + x2) / 2
            cy = (y1 + y2) / 2
            area = (x2 - x1) * (y2 - y1)
            
            # Heuristic depth: smaller area → farther away
            approx_depth = 1000 / (area ** 0.5 + 1e-5)
            
            detections.append({
                'bbox': (x1, y1, x2, y2),
                'class': cls_name,
                'confidence': conf,
                'center': (cx, cy),
                'depth_estimate': approx_depth
            })
    
    return detections

# Depth estimation with MiDaS
import torch

midas = torch.hub.load('intel-isl/MiDaS', 'MiDaS_small')
midas.eval()

transforms = torch.hub.load('intel-isl/MiDaS', 'transforms')
transform = transforms.small_transform

def estimate_depth(img_bgr):
    img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
    input_batch = transform(img_rgb).to('cpu')
    with torch.no_grad():
        prediction = midas(input_batch)
        prediction = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),
            size=img_rgb.shape[:2],
            mode='bicubic', align_corners=False
        ).squeeze()
    return prediction.numpy()

Reinforcement Learning for Robot Control

RL lets a robot learn an optimal policy through trial and error driven by a reward signal. Three main paradigms:

🎯 Model-Free RL — PPO, SAC, TD3. The robot learns directly from interaction; no environment model needed.
🗺️ Model-Based RL — MBPO, Dreamer. Build an internal model, then plan inside it. Sample-efficient.
👁️ Imitation Learning — BC, GAIL, DAgger. The robot learns from human demonstrations; no reward design needed.

PPO (Proximal Policy Optimization) — The Most Popular Algorithm

L^CLIP(θ) = E[ min( r_t(θ) Â_t, clip(r_t(θ), 1−ε, 1+ε) Â_t ) ]

where:
  r_t(θ) = π_θ(a_t|s_t) / π_θ_old(a_t|s_t)   (probability ratio)
  Â_t    = advantage estimate — GAE or TD residual
  ε      = clipping threshold, typically 0.1–0.2
# Minimal PPO training loop for a robot arm (PyTorch)
import torch, torch.nn as nn
import numpy as np

class ActorCritic(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        # Shared feature extractor
        self.shared = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.Tanh(),
            nn.Linear(256, 256), nn.Tanh(),
        )
        # Actor: outputs mean and log_std for continuous actions
        self.actor_mean = nn.Linear(256, act_dim)
        self.actor_log_std = nn.Parameter(torch.zeros(act_dim))
        # Critic: output state value V(s)
        self.critic = nn.Linear(256, 1)
    
    def forward(self, obs):
        feat = self.shared(obs)
        mean = self.actor_mean(feat)
        std  = torch.exp(self.actor_log_std)
        dist = torch.distributions.Normal(mean, std)
        value = self.critic(feat)
        return dist, value
    
    def get_action(self, obs):
        dist, _ = self.forward(obs)
        action = dist.sample()
        log_prob = dist.log_prob(action).sum(-1)
        return action, log_prob

def compute_gae(rewards, values, dones, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation"""
    advantages = np.zeros_like(rewards)
    last_gae = 0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t+1] * (1-dones[t]) - values[t]
        last_gae = delta + gamma * lam * (1-dones[t]) * last_gae
        advantages[t] = last_gae
    return advantages
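The clipped objective above can be written directly as a loss function. A minimal NumPy sketch, standalone rather than wired into the training loop (in practice the ratio comes from the new and stored rollout log-probabilities):

```python
import numpy as np

def ppo_clipped_loss(log_probs_new, log_probs_old, advantages, eps=0.2):
    """Clipped surrogate objective L^CLIP, negated so it can be minimized."""
    ratio = np.exp(log_probs_new - log_probs_old)          # r_t(θ)
    surr1 = ratio * advantages
    surr2 = np.clip(ratio, 1 - eps, 1 + eps) * advantages  # clipped branch
    return -np.mean(np.minimum(surr1, surr2))

# With identical old/new policies the ratio is 1 and no clipping occurs:
adv = np.array([1.0, 2.0, 3.0])
loss = ppo_clipped_loss(np.zeros(3), np.zeros(3), adv)
# loss = -mean(adv) = -2.0
```

The `min` with the clipped branch caps how much a single minibatch can move the policy: when the ratio exceeds 1+ε on a positive-advantage sample, the gradient is cut off.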

Sim-to-Real Transfer

The central problem of RL on physical robots is the sim-to-real gap. Solutions:

Domain Randomization
Vary physical parameters during training — mass, friction, damping, lighting — so the robot learns a policy that is robust to variation.
Domain Adaptation
Use techniques such as DANN or CycleGAN to minimize the distribution gap between simulated and real sensor data.
Fine-tuning on the Real Robot
Pretrain in a simulator (millions of steps), then fine-tune with limited real data (thousands of steps). Requires a sample-efficient algorithm (MAML, SAC).
💡 Practical Tip
For robot manipulation, use Isaac Gym (NVIDIA) or MuJoCo as the simulator. Isaac Gym can run thousands of parallel environments on a GPU, making PPO training up to 100x faster.
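Domain randomization amounts to resampling physics parameters per episode. A minimal sketch with hypothetical parameter ranges (`env.set_physics` stands in for whatever hook your simulator exposes; it is not a real API):

```python
import random

def randomize_domain(rng=random):
    """Sample a fresh set of physical parameters for one training episode."""
    return {
        "mass_scale":    rng.uniform(0.8, 1.2),   # ±20% link mass
        "friction":      rng.uniform(0.5, 1.5),   # contact friction coefficient
        "motor_damping": rng.uniform(0.9, 1.1),
        "latency_ms":    rng.uniform(0.0, 20.0),  # actuation delay
    }

# Per episode: push the sampled parameters into the simulator, then roll out.
# (env.set_physics / env.rollout are hypothetical simulator hooks.)
def train_episode(env, policy):
    env.set_physics(randomize_domain())
    return env.rollout(policy)
```

Because every episode sees different dynamics, the learned policy cannot overfit to one set of simulator constants — the real robot becomes "just another sample" from the randomized distribution.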
02
AI & Intelligence

LLMS FOR
EMBODIED AI

Large language models are changing how robots understand natural-language commands, plan complex tasks, and interact with humans. This is the foundation of Embodied AI — robots that "understand" the world semantically.

Foundation Models for Robotics

Model | Organization | Capabilities | Input
RT-2 | Google DeepMind | Vision-language-action, zero-shot generalization | Image + Text → Action
PaLM-E | Google | Multimodal reasoning, grounding visual input | Image + Text → Plan
SayCan | Google Brain | LLM + affordance model, task decomposition | Text → API calls
ChatGPT/GPT-4o | OpenAI | Task planning, code generation, NLU | Text + Image
OpenVLA | Stanford | Open-source VLA, fine-tunable | Image + Text → Action
π₀ (Pi Zero) | Physical Intelligence | Generalist robot policy | Multi-modal

Architecture: The LLM as a Task Planner

"""
Pola: LLM → High-Level Plan → Motion Primitive Executor
LLM tidak mengontrol joint langsung, tapi decompose task menjadi
fungsi-fungsi primitif yang sudah diimplementasikan.
"""

import openai
import json

# Define the robot's API primitives as tools for the LLM
ROBOT_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "move_to",
            "description": "Move robot end-effector to target position",
            "parameters": {
                "type": "object",
                "properties": {
                    "x": {"type": "number", "description": "X coordinate in meters"},
                    "y": {"type": "number"},
                    "z": {"type": "number"},
                    "speed": {"type": "number", "description": "Speed 0.0-1.0"}
                },
                "required": ["x", "y", "z"]
            }
        }
    },
    {"type": "function", "function": {"name": "open_gripper", "description": "Open gripper", "parameters": {}}},
    {"type": "function", "function": {"name": "close_gripper", "description": "Close gripper to grasp", "parameters": {}}},
    {"type": "function", "function": {"name": "detect_object", "description": "Find object position by name",
        "parameters": {"type": "object", "properties": {"object_name": {"type": "string"}}}}}
]

class LLMRobotPlanner:
    def __init__(self, robot_executor):
        self.client = openai.OpenAI()
        self.robot = robot_executor
        self.system_prompt = """
You are a robot arm controller. When given a task in natural language,
decompose it into primitive robot actions and execute them step-by-step.
Always detect object position before attempting to grasp.
Current robot workspace: 0.3m to 0.8m in front, ±0.4m left/right.
"""
    
    def execute_task(self, task_description):
        messages = [
            {"role": "system", "content": self.system_prompt},
            {"role": "user",   "content": task_description}
        ]
        
        while True:
            response = self.client.chat.completions.create(
                model="gpt-4o", messages=messages,
                tools=ROBOT_TOOLS, tool_choice="auto"
            )
            msg = response.choices[0].message
            
            if not msg.tool_calls:
                print(f"Task complete: {msg.content}")
                break
            
            # Append the assistant message first — the OpenAI API requires
            # it before the matching tool results
            messages.append(msg)

            # Execute each tool call on the physical robot
            for tc in msg.tool_calls:
                result = self.dispatch(tc.function.name,
                           json.loads(tc.function.arguments))
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": json.dumps(result)
                })
📌 Architecture Note
Vision-Language-Action (VLA) models such as RT-2 combine all three capabilities in one neural network: understanding camera images, interpreting language commands, and directly producing joint position targets. This is the current frontier of robotics for 2024–2025.
03
Navigation & Perception

SLAM

SLAM (Simultaneous Localization and Mapping) solves a chicken-and-egg problem: a robot needs a map to navigate, but needs to know its position to build the map. SLAM estimates both at once.

Probabilistic Formulation of SLAM

Given:
  u1:t = controls/odometry (robot motion)
  z1:t = sensor observations (landmarks, LiDAR points)

Estimate:
  x1:t = robot trajectory (pose: position + orientation)
  m    = map (landmark positions or occupancy grid)

Posterior: p(x1:t, m | z1:t, u1:t) — joint probability distribution

By Bayes' rule:
  p(x1:t, m | z1:t, u1:t) ∝ p(zt|xt, m) · p(xt|xt-1, ut) · p(x1:t-1, m | z1:t-1, u1:t-1)
                             ↑ observation model  ↑ motion model  ↑ prior

EKF-SLAM (Extended Kalman Filter)

Assumes linear-Gaussian motion and observation models. State vector = [robot pose + all landmark positions].

State vector:
  μ = [x, y, θ, m1x, m1y, m2x, m2y, ..., mNx, mNy]
  dimension: 3 + 2N   (N = number of landmarks)

Covariance matrix: Σ ∈ ℝ^((3+2N)×(3+2N))

Predict step:
  μ̄t = g(μt-1, ut)          ← nonlinear motion model
  Σ̄t = Gt Σt-1 Gtᵀ + Rt     ← R = odometry noise

Update step (per landmark i):
  Kt = Σ̄t Hiᵀ (Hi Σ̄t Hiᵀ + Qt)⁻¹   ← Kalman gain
  μt = μ̄t + Kt(zt,i − ẑt,i)        ← update with the observation
  Σt = (I − Kt Hi) Σ̄t              ← covariance update
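The predict/update equations can be sketched generically with NumPy — a minimal linearized step where `g`, `G`, `h`, `H` are the motion/observation models and their Jacobians, assumed to be supplied by the caller:

```python
import numpy as np

def ekf_predict(mu, Sigma, u, g, G, R):
    """EKF predict: push the mean through the motion model, inflate covariance."""
    mu_bar = g(mu, u)
    Gt = G(mu, u)                              # motion Jacobian at (mu, u)
    Sigma_bar = Gt @ Sigma @ Gt.T + R          # add odometry noise R
    return mu_bar, Sigma_bar

def ekf_update(mu_bar, Sigma_bar, z, h, H, Q):
    """EKF update: correct the prediction with one observation z."""
    Ht = H(mu_bar)                             # observation Jacobian
    S = Ht @ Sigma_bar @ Ht.T + Q              # innovation covariance
    K = Sigma_bar @ Ht.T @ np.linalg.inv(S)    # Kalman gain
    mu = mu_bar + K @ (z - h(mu_bar))
    Sigma = (np.eye(len(mu_bar)) - K @ Ht) @ Sigma_bar
    return mu, Sigma
```

In EKF-SLAM proper, `mu` stacks the robot pose and all landmarks, so `Sigma` grows as 3+2N and the update touches the cross-covariances between the pose and every landmark.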

Graph-based SLAM (The Modern Backend)

Formulate SLAM as a graph optimization problem: nodes are robot poses, edges are constraints from odometry or loop closures.

Minimize:
  F(x) = Σi,j eᵀij Ωij eij

where:
  eij = h(xi, xj) − z̃ij    (residual: prediction vs observation)
  Ωij = information matrix (inverse covariance)

Solve with Gauss-Newton or Levenberg-Marquardt:
  (JᵀΩJ) Δx = −Jᵀ Ω e
  H Δx = −b  →  sparse linear system (g2o, GTSAM)
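A toy illustration of the normal equations: a 1-D pose graph with two odometry edges and one slightly inconsistent loop-closure edge, solved with a single Gauss-Newton step (the residuals are linear here, so one step is exact; x₀ is pinned with a strong prior to fix the gauge):

```python
import numpy as np

# Edges (i, j, z_ij) encode the constraint x_j - x_i ≈ z_ij
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.8)]   # loop closure disagrees
x = np.array([0.0, 1.0, 2.0])                     # initial guess

H = np.zeros((3, 3))
b = np.zeros(3)
for i, j, z in edges:
    e = (x[j] - x[i]) - z                 # residual e_ij
    J = np.zeros(3)
    J[i], J[j] = -1.0, 1.0                # Jacobian of e_ij wrt x
    H += np.outer(J, J)                   # accumulate JᵀΩJ (Ω = I here)
    b += J * e                            # accumulate JᵀΩe

H[0, 0] += 1e6                            # anchor x0 (remove gauge freedom)
dx = np.linalg.solve(H, -b)               # H Δx = -b
x_opt = x + dx
# x_opt ≈ [0.0, 0.933, 1.867] — the 0.2 m loop-closure error is spread
# across both odometry edges instead of being absorbed by one node
```

Real backends exploit the same structure at scale: `H` stays sparse because each edge only couples its two poses, which is what makes g2o/GTSAM fast.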

Visual SLAM — ORB-SLAM3

# ORB-based visual odometry (an ORB-SLAM3-style frontend) in Python via ROS2
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, CameraInfo
from nav_msgs.msg import Odometry, OccupancyGrid
from geometry_msgs.msg import PoseStamped
import cv2, numpy as np
from cv_bridge import CvBridge

class SLAMNode(Node):
    def __init__(self):
        super().__init__('slam_node')
        self.bridge = CvBridge()
        
        # Subscribe to the camera topics
        self.create_subscription(Image, '/camera/image_raw',
                                  self.image_callback, 10)
        self.create_subscription(CameraInfo, '/camera/camera_info',
                                  self.info_callback, 1)
        
        # Publish the estimated pose
        self.pose_pub = self.create_publisher(PoseStamped,
                                               '/slam/pose', 10)
        
        # ORB feature detector
        self.orb = cv2.ORB_create(nfeatures=1000)
        self.matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        self.prev_frame = None
        self.prev_kp = None
        self.prev_desc = None
        self.camera_matrix = None
    
    def info_callback(self, msg):
        # Camera intrinsics K (3x3) from CameraInfo
        self.camera_matrix = np.array(msg.k).reshape(3, 3)

    def image_callback(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
        gray  = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, desc = self.orb.detectAndCompute(gray, None)
        
        if (self.prev_desc is not None and desc is not None
                and self.camera_matrix is not None):
            matches = self.matcher.match(self.prev_desc, desc)
            matches = sorted(matches, key=lambda x: x.distance)[:50]
            
            if len(matches) >= 8:
                pts1 = np.float32([self.prev_kp[m.queryIdx].pt for m in matches])
                pts2 = np.float32([kp[m.trainIdx].pt for m in matches])
                
                # Essential matrix → recover pose R, t
                E, mask = cv2.findEssentialMat(pts1, pts2,
                              self.camera_matrix, cv2.RANSAC, 0.999, 1.0)
                _, R, t, _ = cv2.recoverPose(E, pts1, pts2,
                                              self.camera_matrix)
                self.publish_pose(R, t, msg.header.stamp)
        
        self.prev_frame = gray
        self.prev_kp = kp
        self.prev_desc = desc

LiDAR SLAM — Cartographer & LOAM

Algorithm | Backend | Sensor | Strengths
Cartographer | Graph SLAM (Ceres) | 2D/3D LiDAR | Google, production-grade, ROS2 support
LOAM | Iterative closest point | 3D LiDAR (Velodyne) | Real-time, edge+planar feature matching
LIO-SAM | Factor graph (GTSAM) | LiDAR + IMU | Tightly coupled, robust loop closure
KISS-ICP | ICP + adaptive threshold | Any LiDAR | Simple, robust, ICRA 2023
RTAB-Map | Bag-of-words + graph | RGB-D / LiDAR | Long-term mapping, online loop closure
04
Kinematics

INVERSE
KINEMATICS

Inverse Kinematics (IK) answers the question: "What joint angles place the end-effector at a target pose?" — the inverse of Forward Kinematics, which computes the pose from the joint angles.

Forward Kinematics — The Foundation of IK

Denavit-Hartenberg (DH) Parameters

The DH convention defines four parameters per joint to describe the transformation between links:

θᵢ (theta)
Rotation about the Zᵢ₋₁ axis (joint angle — the variable for a revolute joint)
dᵢ (d)
Translation along the Zᵢ₋₁ axis (the variable for a prismatic joint)
aᵢ (a / link length)
Translation along the Xᵢ axis (distance between two consecutive Z axes)
αᵢ (alpha / twist)
Rotation about the Xᵢ axis (angle between two consecutive Z axes)

The DH Transformation Matrix

ᵢ₋₁Tᵢ = Rz(θᵢ) · Tz(dᵢ) · Tx(aᵢ) · Rx(αᵢ)

  ⎡ cos θᵢ   -sin θᵢ cos αᵢ    sin θᵢ sin αᵢ   aᵢ cos θᵢ ⎤
  ⎢ sin θᵢ    cos θᵢ cos αᵢ   -cos θᵢ sin αᵢ   aᵢ sin θᵢ ⎥
  ⎢   0           sin αᵢ           cos αᵢ          dᵢ     ⎥
  ⎣   0             0                0              1     ⎦

Forward kinematics (6-DOF):
  ⁰T₆ = ⁰T₁ · ¹T₂ · ²T₃ · ³T₄ · ⁴T₅ · ⁵T₆

End-effector position:  p = ⁰T₆[0:3, 3]
Orientation:            R = ⁰T₆[0:3, 0:3]

Analytical IK (Closed-form)

For a 6-DOF robot satisfying the Pieper condition (the last three axes intersect at a single point), a closed-form analytical solution exists. Example derivation for a UR5-class arm:

Step 1 — Joint 1 (θ₁), from the wrist center (wc):
  wc = p_target − d₆ · R_target · [0, 0, 1]ᵀ
  θ₁ = atan2(wc_y, wc_x)          (two solutions: ± atan2)

Step 2 — Joint 3 (θ₃), from the law of cosines:
  D  = (r² + s² − a₂² − a₃²) / (2·a₂·a₃)
  θ₃ = atan2(±√(1 − D²), D)       (elbow up / elbow down)
  where:
    r = √(wc_x² + wc_y²) − a₁
    s = wc_z − d₁

Step 3 — Joint 2 (θ₂):
  θ₂ = atan2(s, r) − atan2(a₃·sin θ₃, a₂ + a₃·cos θ₃)

Steps 4–6 — Joints 4, 5, 6 (Euler angles from R₃₆):
  R₃₆ = (⁰T₃)⁻¹ · R_target → ZYZ Euler decomposition for θ₄, θ₅, θ₆
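Steps 2–3 are the classic planar two-link solution. A minimal numeric sketch with hypothetical link lengths a₂ = a₃ = 0.3 m, verified against forward kinematics:

```python
import numpy as np

def planar_2link_ik(r, s, a2=0.3, a3=0.3, elbow_up=True):
    """Solve θ2, θ3 for a planar 2-link arm reaching point (r, s)."""
    D = (r**2 + s**2 - a2**2 - a3**2) / (2 * a2 * a3)
    if abs(D) > 1.0:
        return None                                 # target out of reach
    sign = 1.0 if elbow_up else -1.0
    theta3 = np.arctan2(sign * np.sqrt(1 - D**2), D)
    theta2 = np.arctan2(s, r) - np.arctan2(a3 * np.sin(theta3),
                                           a2 + a3 * np.cos(theta3))
    return theta2, theta3

# Round-trip check with forward kinematics
t2, t3 = planar_2link_ik(0.4, 0.2)
x = 0.3 * np.cos(t2) + 0.3 * np.cos(t2 + t3)
y = 0.3 * np.sin(t2) + 0.3 * np.sin(t2 + t3)
# x ≈ 0.4, y ≈ 0.2 — the solution reproduces the target
```

Flipping `elbow_up` selects the other of the two solution branches from Step 2; both reach the same point with mirrored elbow configurations.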

Numerical Methods — Jacobian Inverse

For arbitrary or redundant robot configurations, use an iterative Jacobian-based approach:

The Jacobian J(q) ∈ ℝ⁶ˣⁿ maps joint velocities → end-effector velocity:
  ẋ = J(q) · q̇

Jacobian column for revolute joint i:
  Jᵥᵢ = zᵢ₋₁ × (p_e − pᵢ₋₁)   ← linear velocity
  Jωᵢ = zᵢ₋₁                  ← angular velocity
  where:
    zᵢ₋₁ = 3rd column of ⁰Tᵢ₋₁ (rotation axis of joint i in the world frame)
    p_e  = end-effector position
    pᵢ₋₁ = origin of frame i−1

Joint-angle update rules:
  Δq = Jᵀ · Δx                  ← Jacobian transpose: simple, always stable
  Δq = J⁺ · Δx                  ← pseudoinverse: more precise
  Δq = (JᵀJ + λ²I)⁻¹ Jᵀ Δx      ← Damped Least Squares (DLS), singularity-robust
  J⁺ = Jᵀ(JJᵀ)⁻¹                ← Moore-Penrose pseudoinverse (non-singular case)

Iterate:
  q_{k+1} = q_k + α · Δq_k
  while ||x_target − FK(q)|| > tolerance

Python IK Implementation — 6-DOF Robot

import numpy as np

class RobotArm6DOF:
    """
    6-DOF robot arm IK solver.
    DH parameters: [a, alpha, d, theta_offset]
    """
    def __init__(self, dh_params):
        # dh_params: list of [a, alpha, d, theta_offset] per joint
        self.dh = np.array(dh_params)
    
    def dh_matrix(self, a, alpha, d, theta):
        """Matriks transformasi DH 4x4"""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([
            [ct,  -st*ca,   st*sa,  a*ct],
            [st,   ct*ca,  -ct*sa,  a*st],
            [ 0,      sa,      ca,     d],
            [ 0,       0,       0,     1]
        ])
    
    def forward_kinematics(self, q):
        """FK: hitung pose end-effector dari joint angles q"""
        T = np.eye(4)
        for i, (a, alpha, d, theta_off) in enumerate(self.dh):
            Ti = self.dh_matrix(a, alpha, d, q[i] + theta_off)
            T  = T @ Ti
        return T  # 4x4 homogeneous transform
    
    def jacobian(self, q, eps=1e-6):
        """Numerical Jacobian via finite differences"""
        T0 = self.forward_kinematics(q)
        p0 = T0[:3, 3]
        n  = len(q)
        J  = np.zeros((6, n))
        
        for i in range(n):
            q_eps = q.copy()
            q_eps[i] += eps
            Ti   = self.forward_kinematics(q_eps)
            pi   = Ti[:3, 3]
            # Linear velocity part
            J[:3, i] = (pi - p0) / eps
            # Angular velocity part (from rotation matrix change)
            dR = (Ti[:3,:3] - T0[:3,:3]) / eps
            # Extract angular velocity (skew-symmetric part)
            J[3, i] = dR[2, 1]
            J[4, i] = dR[0, 2]
            J[5, i] = dR[1, 0]
        return J
    
    def ik_dls(self, target_pos, target_R=None, q_init=None,
              max_iter=200, tol=1e-4, lam=0.01):
        """
        Damped Least Squares IK solver.
        target_pos: [x, y, z] target position
        target_R:   3x3 target rotation (optional)
        Returns: q (joint angles), converged (bool)
        """
        q = q_init if q_init is not None else np.zeros(len(self.dh))
        
        for iteration in range(max_iter):
            T = self.forward_kinematics(q)
            pos_err = target_pos - T[:3, 3]
            
            if target_R is not None:
                R_err = target_R @ T[:3,:3].T
                angle = np.arccos(np.clip((np.trace(R_err) - 1) / 2, -1, 1))
                if angle > 1e-8:
                    ax = np.array([R_err[2,1]-R_err[1,2],
                                   R_err[0,2]-R_err[2,0],
                                   R_err[1,0]-R_err[0,1]]) / (2 * np.sin(angle))
                    rot_err = angle * ax
                else:
                    rot_err = np.zeros(3)
                dx = np.concatenate([pos_err, rot_err])
            else:
                dx = pos_err
            
            if np.linalg.norm(pos_err) < tol:
                return q, True
            
            J = self.jacobian(q)
            if target_R is None:
                J = J[:3]  # position only
            
            # DLS: Δq = Jᵀ(JJᵀ + λ²I)⁻¹ Δx
            JJt = J @ J.T
            dq  = J.T @ np.linalg.solve(JJt + lam**2 * np.eye(J.shape[0]), dx)
            q   = q + dq
            
            # Joint limit clipping (example: ±180°)
            q = np.clip(q, -np.pi, np.pi)
        
        return q, False

# Example: UR5 DH parameters [a, alpha, d, theta_offset]
ur5_dh = [
    [0,      np.pi/2,  0.089,  0],
    [-0.425, 0,        0,       0],
    [-0.392, 0,        0,       0],
    [0,      np.pi/2,  0.109,  0],
    [0,     -np.pi/2,  0.095,  0],
    [0,      0,        0.082,  0],
]

robot = RobotArm6DOF(ur5_dh)
target = np.array([0.5, 0.2, 0.3])
q_sol, ok = robot.ik_dls(target)
print(f"IK {'converged' if ok else 'failed'}: {np.degrees(q_sol).round(2)}°")

Modern Methods: FABRIK & Neural IK

Method | Complexity | Singularities | Use Case
Analytical (closed-form) | O(1) | Need manual handling | Standard 6-DOF robots, real-time control
Jacobian Transpose | O(n²) | Can get stuck | Simple, sufficient for many cases
Damped Least Squares | O(n³) | Robust via damping | General, production-grade
FABRIK | O(n) | N/A (geometric) | Animation, characters, long chains
Neural IK (learned) | O(1) inference | Learned away | Non-standard kinematics, deformables
CCD (Cyclic Coordinate Descent) | O(n) | N/A | Redundant arms, obstacle avoidance
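FABRIK, listed above, works purely on joint positions rather than angles. A minimal 2-D sketch for a serial chain with a fixed base (simple illustration, no joint-limit handling):

```python
import numpy as np

def fabrik(points, target, tol=1e-4, max_iter=100):
    """FABRIK on a chain of joint positions (Nx2 array, base fixed)."""
    pts = np.array(points, dtype=float)
    lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # link lengths
    base = pts[0].copy()
    for _ in range(max_iter):
        # Backward pass: snap the end-effector to the target, pull the rest
        pts[-1] = target
        for i in range(len(pts) - 2, -1, -1):
            d = pts[i] - pts[i + 1]
            pts[i] = pts[i + 1] + d / np.linalg.norm(d) * lengths[i]
        # Forward pass: re-anchor the base, push outward
        pts[0] = base
        for i in range(1, len(pts)):
            d = pts[i] - pts[i - 1]
            pts[i] = pts[i - 1] + d / np.linalg.norm(d) * lengths[i - 1]
        if np.linalg.norm(pts[-1] - target) < tol:
            break
    return pts

chain = fabrik([[0, 0], [1, 0], [2, 0]], target=[1.0, 1.0])
# end-effector converges to (1.0, 1.0); link lengths stay exactly 1.0
```

Each pass only normalizes segment directions, which is why FABRIK is O(n) per iteration and never hits a kinematic singularity.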
05
Hardware

ELECTRONICS
FOR ROBOTICS

Microcontrollers & Compute Platforms

Platform | CPU/MCU | RAM | GPIO | Use
Arduino Mega | ATmega2560, 16 MHz | 8 KB | 54 | Motor control, basic sensor fusion
ESP32-S3 | Xtensa LX7 dual-core, 240 MHz | 512 KB + 8 MB | 36 | WiFi/BT robots, edge AI (TFLite)
STM32F4 | Cortex-M4, 168 MHz | 192 KB | 50+ | Real-time motor control, FOC
Raspberry Pi 5 | Cortex-A76, 2.4 GHz | 4–8 GB | 40 | ROS2 host, vision processing
NVIDIA Jetson Nano | Cortex-A57 + 128-core GPU | 4 GB | 40 | On-robot deep learning inference
NVIDIA Jetson Orin NX | 8-core Cortex-A78AE + 1024 CUDA cores | 16 GB | 40 | Real-time vision + SLAM + DNN
BeagleBone AI-64 | Cortex-A72 + DSP + MMA | 4 GB | 80+ | Industrial robots, PRU real-time

Actuators — Motors and Drivers

Motor Types

🔄 Brushed DC Motor — Cheap, easy to control. Drivers: L298N, TB6612. For mobile-robot wheels and prototypes.
BLDC (Brushless DC) — High efficiency, high torque. Drivers: ESC or FOC controllers (ODrive, VESC). For drones and legged robots.
🎯 Servo — Built-in position control. PWM at 50 Hz, 1–2 ms pulse. Smart servos: Dynamixel XH540 (RS-485 protocol).
🔩 Stepper — Accurate open-loop positioning. Drivers: A4988, TMC2209. For CNC, 3D printers, mechanical grippers.

FOC (Field-Oriented Control) — Modern BLDC Control

Clarke transform (3-phase → 2-phase stationary α-β):
  Iα = Ia
  Iβ = (Ia + 2·Ib) / √3

Park transform (α-β stationary → d-q rotating):
  Id =  Iα·cos(θ) + Iβ·sin(θ)
  Iq = -Iα·sin(θ) + Iβ·cos(θ)

  Id → flux control   (target: 0 for a PMSM)
  Iq → torque control (proportional to torque)

PI controller per axis:
  Vd = Kp·(Id_ref − Id) + Ki·∫(Id_ref − Id)dt
  Vq = Kp·(Iq_ref − Iq) + Ki·∫(Iq_ref − Iq)dt

Inverse Park + inverse Clarke → SVPWM → 3-phase output
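The two transforms can be sketched directly. A minimal illustration, assuming balanced three-phase currents (so Ic = −Ia − Ib and only two phases need measuring):

```python
import math

def clarke(ia, ib):
    """3-phase → stationary α-β frame (two measured phases, balanced system)."""
    i_alpha = ia
    i_beta = (ia + 2 * ib) / math.sqrt(3)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """Stationary α-β → rotating d-q frame at rotor electrical angle theta."""
    i_d =  i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q

# Balanced sinusoidal phase currents locked to the rotor angle become
# constant d-q values — the whole point of FOC: DC quantities to regulate.
theta = 0.7
ia = math.cos(theta)
ib = math.cos(theta - 2 * math.pi / 3)
i_d, i_q = park(*clarke(ia, ib), theta)
# i_d ≈ 1.0, i_q ≈ 0.0 for this flux-aligned current
```

Because Id and Iq are constants rather than sinusoids, plain PI loops (as in the equations above) suffice to regulate them.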

Robot Sensor Suite

Sensor | Output | Interface | Use
IMU (MPU-6050, ICM-42688) | 6/9-axis accel + gyro + mag | I²C/SPI | Orientation, odometry, stabilization
Quadrature Encoder | A/B pulses, 1000–4096 PPR | GPIO interrupt | Motor position and speed
2D LiDAR (RPLiDAR A3) | 360° scan, 0.15–25 m | UART/USB | 2D SLAM, obstacle avoidance
3D LiDAR (Livox Mid-360) | Point cloud, 40 m range | Ethernet | 3D SLAM, object detection
RGB-D (Intel RealSense) | Color + depth 1280×720 | USB 3 | Visual SLAM, grasp planning
Force/Torque (ATI Nano17) | 6-axis F/T, ±70 N | SPI/analog | Force control, contact detection
ToF Array (VL53L5CX) | 8×8 depth matrix | I²C | Proximity, gestures, obstacles

Power Management for Robots

/* Power distribution architecture for a mobile robot */

/* Battery: 4S LiPo (14.8V nominal, 16.8V full, 12V cutoff) */
/* or 6S for larger robots */

/*
  BATT (14.8V) ──┬──> FUSE (30A) ──> Motor Driver (BLDC/DC Motor)
                 │
                 ├──> Voltage Regulator 12V ──> Servo Bus
                 │      (LM2596 Buck, ≥5A)
                 │
                 ├──> Voltage Regulator 5V  ──> Raspberry Pi
                 │      (MP2315 Buck, 3A)        GPIO Sensors
                 │
                 └──> Voltage Regulator 3.3V ──> STM32 / Sensors
                        (AMS1117 atau LDO)

  Battery monitor: INA226 (I2C) for current sensing
  Balancer: integrated BMS for overcharge/overdischarge protection
*/

// Arduino power monitoring
#include <Wire.h>
#include <Adafruit_INA226.h>

Adafruit_INA226 ina226;

void setup() {
    Wire.begin();
    ina226.begin();
    ina226.setShunt(0.01, 20.0);  // 10mΩ shunt, 20A max
    ina226.setAveragingCount(INA226_COUNT_64);
}

void loop() {
    float voltage = ina226.readBusVoltage();   // Volt
    float current = ina226.readShuntCurrent();  // Ampere
    float power   = ina226.readPower();         // Watt
    
    if (voltage < 12.0) {
        emergency_shutdown();  // LiPo protection
    }
}

Communication Buses for Robots

Protocol | Speed | Topology | Robotics Use
CAN Bus / CAN FD | 1 Mbps / 8 Mbps | Multi-drop bus | Motor controllers, industry (commercial robot arms)
EtherCAT | 100 Mbps | Ring/daisy chain | Real-time motion control, industrial servos
RS-485 (Modbus/Dynamixel) | 3 Mbps | Multi-drop | Dynamixel smart servos, industrial sensors
SPI | 40+ MHz | Point-to-point | IMUs, encoders, high-speed sensors
I²C | 400 kHz / 3.4 MHz | Multi-master/slave | I²C sensors (ToF, IMU, pressure)
UART/USART | 115200–3M bps | Point-to-point | GPS, LiDAR, debugging, ROS serial bridge
06
Hardware

MECHANICAL
ENGINEERING

Static Analysis of Robot Links

Beam Theory — Deflection and Stress

Euler-Bernoulli beam (for a horizontal robot link under load):
  EI (d⁴w/dx⁴) = q(x)
    E = Young's modulus of the material (GPa)
    I = second moment of area of the cross-section
    w = deflection (m)
    q = distributed load (N/m)

Second moment of area:
  Rectangular section (b×h):           I = bh³/12
  Tube (diameter D, wall thickness t): I = π(D⁴ − (D−2t)⁴)/64

Maximum cantilever deflection (point load P at the tip):
  δ_max = PL³ / (3EI)

Maximum bending stress:
  σ_max = M·c / I = PL·(h/2) / I ≤ σ_yield / FOS
  FOS (factor of safety) = 2.0–4.0 for robots
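Worked sketch of the cantilever formulas: deflection and bending stress of a hypothetical 300 mm aluminium 6061-T6 link with a 20×10 mm rectangular section and a 2 kg payload at the tip (all numbers are illustrative):

```python
E = 68.9e9           # Young's modulus, Al 6061-T6 (Pa)
b, h = 0.020, 0.010  # cross-section width x height (m)
L = 0.300            # link length (m)
P = 2.0 * 9.81       # tip load from a 2 kg payload (N)

I = b * h**3 / 12                     # second moment of area (m^4)
delta_max = P * L**3 / (3 * E * I)    # tip deflection (m)
sigma_max = P * L * (h / 2) / I       # max bending stress at the root (Pa)
fos = 276e6 / sigma_max               # factor of safety vs 276 MPa yield

print(f"deflection = {delta_max * 1000:.2f} mm, "
      f"stress = {sigma_max / 1e6:.1f} MPa, FOS = {fos:.1f}")
# ≈ 1.54 mm deflection, ≈ 17.7 MPa stress — stress is comfortable,
# but 1.5 mm of tip sag may already be too much for a precision arm
```

Note the typical outcome for robot links: stiffness (deflection), not strength (yield), is usually the binding constraint.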

Materials for Robot Structures

Material | Density (g/cm³) | Yield (MPa) | E (GPa) | Strengths
Aluminium 6061-T6 | 2.7 | 276 | 68.9 | Light, machinable, common
Aluminium 7075-T6 | 2.81 | 503 | 71.7 | Stronger, aerospace grade
Steel 4140 | 7.85 | 655 | 200 | Very strong, cheap, heavy
Titanium Grade 5 | 4.43 | 880 | 113.8 | Strong and light, biocompatible
Carbon Fiber (CFRP) | 1.6 | 600+ | 70–200 | Very light, anisotropic
PETG (3D print) | 1.27 | 50 | 2.1 | Fast prototyping, not brittle
Nylon PA12 | 1.01 | 48 | 1.7 | Impact-resistant, flexible

Gear Trains and Speed Reduction

Gear ratio:
  N = ω_input / ω_output = T_output / T_input

Example:
  BLDC motor: 5000 rpm, 0.5 Nm
  Target output: 25 rpm, 100 Nm
  Required ratio: N = 5000/25 = 200:1

Reducer types for robots:

1. Harmonic Drive (strain wave gear):
   - Ratio: 50:1 – 320:1 in a single stage
   - Backlash: near zero (<1 arcmin)
   - Efficiency: 65–80%
   - Used in: high-precision arm joints

2. Planetary gearbox:
   - Ratio: 3:1 – 100:1 per stage
   - Efficiency: 97–99%
   - Backlash: 5–20 arcmin (standard)
   - Used in: mobile robots, drone arms

3. Cycloidal drive:
   - Ratio: 6:1 – 119:1 single stage
   - Torque density: very high
   - Backlash: near zero
   - Used in: legged robots (ANYdrive)

Torque density comparison:
  σ_torque = T_rated / volume → higher is better
  Harmonic Drive > Cycloidal > Planetary > Worm Gear
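The sizing arithmetic above, as a quick check — the 75% single-stage efficiency is an assumed figure within the 65–80% Harmonic Drive range:

```python
motor_speed_rpm = 5000
motor_torque_nm = 0.5
target_speed_rpm = 25
efficiency = 0.75   # assumed, within the 65-80% Harmonic Drive range

ratio = motor_speed_rpm / target_speed_rpm             # speed-matched ratio
output_torque = motor_torque_nm * ratio * efficiency   # deliverable torque

print(f"ratio = {ratio:.0f}:1, output torque = {output_torque:.0f} Nm")
# 200:1 gives 0.5 * 200 * 0.75 = 75 Nm — short of the 100 Nm target,
# so you need a stronger motor, or a higher ratio plus speed margin
```

This is why speed and torque requirements must be checked together: the ratio that matches the speed target does not automatically satisfy the torque target once losses are included.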

Robot Joint Design

Determine the Torque Requirement
Compute the static load (gravity) plus the dynamic load (link acceleration, payload). Use the recursive Newton-Euler algorithm: T = I·α + ω×(I·ω) + r×F_grav
Select the Motor + Reducer
Motor torque × gear ratio × efficiency ≥ required torque × FOS. Check rated speed, peak torque, and duty cycle. Consider back-drivability for cobots.
Design the Bearing Arrangement
Use crossed-roller or angular-contact bearings to carry bending moment plus thrust. Compute the L10 bearing life: L10 = (C/P)³ × 10⁶ rev.
FEA Verification
Run static and modal analysis in FreeCAD FEM, Fusion 360, or ANSYS. Check factor of safety ≥ 2 and that natural frequencies are not close to the control frequency.
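Steps 1–3 of the workflow above can be sketched numerically. All numbers are hypothetical: a 0.4 m link with a 3 kg tip payload, an example motor/reducer, and example bearing ratings C and P:

```python
# Step 1: static gravity torque at the joint (worst case: arm horizontal)
link_mass, link_len = 1.5, 0.4          # kg, m (link CoM at mid-length)
payload = 3.0                           # kg at the tip
g = 9.81
T_static = g * (link_mass * link_len / 2 + payload * link_len)   # Nm

# Dynamic term: accelerating the payload at 2 rad/s^2
I_approx = payload * link_len**2        # point-mass approximation (kg.m^2)
T_dyn = I_approx * 2.0
T_required = T_static + T_dyn           # ≈ 15.7 Nm

# Step 2: motor + reducer check with FOS = 2
motor_torque, ratio, eff = 0.35, 100, 0.97
assert motor_torque * ratio * eff >= T_required * 2.0   # 33.95 >= 31.35 Nm

# Step 3: L10 bearing life at 20 rpm joint speed
C, P = 15000.0, 2000.0                  # dynamic rating / equivalent load (N)
L10_rev = (C / P) ** 3 * 1e6            # revolutions
L10_hours = L10_rev / (20 * 60)
```

The gravity term dominates here, which is typical for slow arms; for fast legged-robot joints the dynamic term can exceed the static one and must be taken from the full Newton-Euler recursion rather than this point-mass shortcut.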

3D Printing for Robot Prototypes

Technology | Material | Tolerance | Robotics Use
FDM (Bambu Lab, Prusa) | PLA/PETG/ABS/Nylon/CF | ±0.2 mm | Housings, brackets, fast prototypes
SLA/MSLA (Elegoo) | Standard/ABS-like resin | ±0.05 mm | Fine parts, small gears, gripper fingertips
SLS (industrial) | Nylon PA12/PA11 | ±0.1 mm | Isotropic structural parts, no supports
MJF (HP) | Nylon PA12 | ±0.08 mm | Small-batch production, end-use parts
Metal SLM | Ti64, AlSi10Mg, 316L | ±0.05 mm | Joint housings, flanges, high-stress parts
07
Software

PYTHON
PROGRAMMING

ROS2 with Python — Foundations

#!/usr/bin/env python3
"""ROS2 Node template untuk robot controller"""
import rclpy
from rclpy.node import Node
from rclpy.action import ActionServer, ActionClient
from rclpy.callback_groups import ReentrantCallbackGroup
from std_msgs.msg import Float64MultiArray
from sensor_msgs.msg import JointState
from geometry_msgs.msg import PoseStamped
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint
import numpy as np
from builtin_interfaces.msg import Duration

class RobotController(Node):
    def __init__(self):
        super().__init__('robot_controller')
        
        # Callback group for concurrent callbacks
        self.cb_group = ReentrantCallbackGroup()
        
        # Publishers
        self.joint_cmd_pub = self.create_publisher(
            JointTrajectory,
            '/joint_trajectory_controller/joint_trajectory',
            10
        )
        
        # Subscribers
        self.joint_state_sub = self.create_subscription(
            JointState, '/joint_states',
            self.joint_state_callback, 10,
            callback_group=self.cb_group
        )
        
        # Service client for IK (MoveIt2)
        from moveit_msgs.srv import GetPositionIK
        self.ik_client = self.create_client(
            GetPositionIK, '/compute_ik'
        )
        
        self.current_joints = None
        self.get_logger().info('Robot Controller initialized')
    
    def joint_state_callback(self, msg):
        self.current_joints = {
            name: pos for name, pos
            in zip(msg.name, msg.position)
        }
    
    async def move_to_pose(self, target_pose: PoseStamped):
        """Kirim target pose, dapatkan IK, eksekusi trajectory"""
        from moveit_msgs.srv import GetPositionIK
        from moveit_msgs.msg import PositionIKRequest
        
        req = GetPositionIK.Request()
        req.ik_request = PositionIKRequest()
        req.ik_request.group_name = 'arm'
        req.ik_request.pose_stamped = target_pose
        req.ik_request.timeout.sec = 2
        
        future = self.ik_client.call_async(req)
        await future
        
        if future.result().error_code.val == 1:  # SUCCESS
            positions = future.result().solution.joint_state.position
            await self.execute_joint_trajectory(positions)
        else:
            self.get_logger().error('IK failed')
    
    async def execute_joint_trajectory(self, target_positions,
                                          duration_sec=2.0):
        traj = JointTrajectory()
        traj.joint_names = ['joint_1', 'joint_2', 'joint_3',
                            'joint_4', 'joint_5', 'joint_6']
        
        point = JointTrajectoryPoint()
        point.positions = list(target_positions)
        point.time_from_start = Duration(sec=int(duration_sec),
                                         nanosec=int((duration_sec % 1.0) * 1e9))
        traj.points = [point]
        
        self.joint_cmd_pub.publish(traj)
        self.get_logger().info(f'Executing trajectory to {target_positions}')

def main():
    rclpy.init()
    node = RobotController()
    executor = rclpy.executors.MultiThreadedExecutor()
    executor.add_node(node)
    try:
        executor.spin()
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
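`execute_joint_trajectory` above publishes a single endpoint; real controllers normally send an interpolated sequence of points. A minimal sketch of cubic time scaling between two joint configurations, in plain Python (the helper name `cubic_trajectory` is illustrative, not a ROS2 API):

```python
import numpy as np

def cubic_trajectory(q0, q1, duration, dt=0.01):
    """Interpolate between joint vectors q0 and q1 with cubic time scaling.

    The smooth-step s(t) = 3t^2 - 2t^3 gives zero velocity at both endpoints.
    Returns a list of (time, joint_vector) samples.
    """
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    samples = []
    for t in np.arange(0.0, duration + dt, dt):
        s = min(t / duration, 1.0)
        s = 3 * s**2 - 2 * s**3   # s(0)=0, s(1)=1, s'(0)=s'(1)=0
        samples.append((t, q0 + s * (q1 - q0)))
    return samples

# Each (t, q) sample would map to one JointTrajectoryPoint with
# time_from_start = t and positions = q.
traj = cubic_trajectory([0.0, 0.0], [1.0, -0.5], duration=2.0, dt=0.5)
```

With `dt` matched to the controller period, the samples map directly onto `traj.points`.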

PyBullet Simulation — Physics-Based Robot Sim

import pybullet as pb
import pybullet_data
import numpy as np
import time

class RobotSimulation:
    def __init__(self, urdf_path, gui=True):
        self.client = pb.connect(pb.GUI if gui else pb.DIRECT)
        pb.setAdditionalSearchPath(pybullet_data.getDataPath())
        pb.setGravity(0, 0, -9.81)
        
        # Load the ground plane and the robot
        self.plane = pb.loadURDF('plane.urdf')
        self.robot = pb.loadURDF(urdf_path,
                                  basePosition=[0, 0, 0],
                                  useFixedBase=True)
        
        # Identify the actuated (revolute) joint indices
        self.num_joints = pb.getNumJoints(self.robot)
        self.joint_ids  = []
        for i in range(self.num_joints):
            info = pb.getJointInfo(self.robot, i)
            if info[2] == pb.JOINT_REVOLUTE:
                self.joint_ids.append(i)
        
        # Simulation timestep
        pb.setTimeStep(1/240)
    
    def set_joint_positions(self, q):
        for jid, angle in zip(self.joint_ids, q):
            pb.setJointMotorControl2(
                self.robot, jid,
                controlMode=pb.POSITION_CONTROL,
                targetPosition=angle,
                force=300,        # max force (N)
                maxVelocity=2.0   # rad/s
            )
    
    def ik_pybullet(self, ee_link_id, target_pos, target_orn=None):
        """Gunakan IK solver built-in PyBullet"""
        if target_orn is not None:
            q = pb.calculateInverseKinematics(
                self.robot, ee_link_id,
                target_pos, target_orn,
                maxNumIterations=100,
                residualThreshold=1e-5
            )
        else:
            q = pb.calculateInverseKinematics(
                self.robot, ee_link_id, target_pos,
                maxNumIterations=100
            )
        return np.array(q)
    
    def step(self, render=True):
        pb.stepSimulation()
        if render:
            time.sleep(1/240)
    
    def get_ee_pose(self, ee_link_id):
        state = pb.getLinkState(self.robot, ee_link_id)
        pos = np.array(state[4])  # world position
        orn = np.array(state[5])  # world orientation (quaternion)
        return pos, orn

# Example usage
sim = RobotSimulation('kuka_iiwa/model.urdf')
target = [0.5, 0.0, 0.6]
q_sol = sim.ik_pybullet(6, target)
sim.set_joint_positions(q_sol)

for _ in range(500):
    sim.step()

Key Libraries in the Python Robotics Ecosystem

Library | Function | Install
rclpy | ROS2 Python client | apt install ros-humble-rclpy
numpy / scipy | Numerical computing, transforms | pip install numpy scipy
roboticstoolbox-python | DH, FK/IK, trajectories, simulator | pip install roboticstoolbox-python
spatialmath-python | SO3, SE3, quaternions | pip install spatialmath-python
OpenCV (cv2) | Computer vision, cameras | pip install opencv-python
PyBullet | Physics simulation | pip install pybullet
MuJoCo | High-fidelity physics sim (the old mujoco-py wrapper is deprecated) | pip install mujoco
PyTorch / TF | Deep learning | pip install torch
stable-baselines3 | RL algorithms (PPO, SAC, TD3) | pip install stable-baselines3
ikpy | Pure-Python IK library | pip install ikpy
open3d | Point cloud processing | pip install open3d
pyserial | Serial communication (Arduino) | pip install pyserial
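As quick intuition for what spatialmath-python's SE3 type represents, a homogeneous transform can be built and applied with plain NumPy (illustrative sketch; `se3` and `apply` are ad-hoc helpers, not the spatialmath API):

```python
import numpy as np

def se3(rz_rad, t):
    """Homogeneous 4x4 transform: rotation about z by rz_rad, then translation t."""
    c, s = np.cos(rz_rad), np.sin(rz_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

def apply(T, p):
    """Transform a 3D point p by T using homogeneous coordinates."""
    return (T @ np.append(p, 1.0))[:3]

T = se3(np.pi / 2, [1.0, 0.0, 0.5])   # 90 deg about z, then translate
p = apply(T, [1.0, 0.0, 0.0])          # -> approximately [1.0, 1.0, 0.5]
```

spatialmath wraps exactly this algebra in composable `SE3` objects with operator overloading and safety checks.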
08
Security

OFFENSIVE
SECURITY

⚠️ Disclaimer
This material is intended for security research, penetration testing with formal authorization, and building robot defense systems. Using these techniques without authorization is illegal.

Attack Surface of a Modern Robot

📡
Network Layer
ROS2 DDS, WiFi control, Ethernet, REST/WebSocket API, teleoperation protocols.
🔧
Firmware / OS
MCU firmware, Linux RT kernel, RTOS, bootloader, OTA update mechanism.
🚌
Communication Buses
CAN bus injection, I²C sniffing, UART debug port, JTAG/SWD access.
🧠
AI Model
Adversarial examples, data poisoning, model extraction, prompt injection against robot-mounted LLMs.

ROS2 Security Assessment

ROS2 Threat Model

ROS2 uses DDS (Data Distribution Service) as its middleware. By default, DDS traffic is unencrypted and there is no authentication between nodes.

Attack Vector | Description | Impact | Mitigation
Topic spoofing | Malicious node publishes to /cmd_vel or /joint_commands | Full control of the robot | SROS2, DDS security plugins
Topic eavesdropping | Subscribe to /camera/image_raw, /map, sensor data | Sensor-data exfiltration | DDS encryption (TLS)
Node injection | Run a malicious ROS2 node inside the network | MitM, replay attacks | Namespace access control
Parameter server attack | Modify parameters via /robot/set_parameters | Configuration changes | Parameter whitelist, auth
Service call abuse | Call /emergency_stop, /reset, /load_controller | DoS, unsafe state | Service ACL
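Because default DDS traffic is plaintext, participant discovery (RTPS SPDP announcements, multicast 239.255.0.1 on port 7400 + 250·domain_id under the default port mapping) can be observed passively. A sketch of an RTPS header parser and discovery sniffer in pure Python, for assessment under authorization only (the 20-byte header layout follows the RTPS 2.x wire format):

```python
import socket
import struct

def parse_rtps_header(data: bytes):
    """Parse the 20-byte RTPS message header; return None if not RTPS."""
    if len(data) < 20 or data[:4] != b'RTPS':
        return None
    major, minor = data[4], data[5]
    vendor_id = struct.unpack('>H', data[6:8])[0]   # identifies the DDS vendor
    guid_prefix = data[8:20].hex()                  # identifies the participant
    return {'version': f'{major}.{minor}',
            'vendor_id': vendor_id,
            'guid_prefix': guid_prefix}

def sniff_discovery(domain_id=0, timeout=5.0):
    """Listen for one SPDP discovery announcement on the well-known multicast group."""
    port = 7400 + 250 * domain_id    # PB + DG * domainId (default port mapping)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('', port))
    mreq = socket.inet_aton('239.255.0.1') + socket.inet_aton('0.0.0.0')
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(timeout)
    try:
        data, addr = sock.recvfrom(65535)
        return addr, parse_rtps_header(data)
    except socket.timeout:
        return None
    finally:
        sock.close()
```

Seeing participant GUIDs appear without any credentials is itself evidence that SROS2 is not enforced.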
# ROS2 Security Assessment Tools
# ================================

# 1. Enumerate all active topics on the ROS2 network
$ ros2 topic list -v
$ ros2 topic echo /cmd_vel        # intercept velocity commands
$ ros2 topic echo /joint_states   # sniff joint positions

# 2. Enumerate services and parameters
$ ros2 service list
$ ros2 param list
$ ros2 param get /robot_driver motor_kp

# 3. Node graph for attack-surface mapping
$ ros2 node list
$ rqt_graph   # visual pub/sub graph

# 4. Malicious publish to /cmd_vel (only with pentest authorization)
$ ros2 topic pub /cmd_vel geometry_msgs/msg/Twist \
  "{linear: {x: 0.5}, angular: {z: 0.3}}"

# 5. SROS2 — enabling security (ROS2 Humble syntax)
$ ros2 security create_keystore ~/sros2_keystore
$ ros2 security create_enclave ~/sros2_keystore /talker
$ ros2 security create_permission ~/sros2_keystore /talker \
  ~/sros2_keystore/enclaves/talker/policy.xml
$ export ROS_SECURITY_KEYSTORE=~/sros2_keystore
$ export ROS_SECURITY_ENABLE=true
$ export ROS_SECURITY_STRATEGY=Enforce

Firmware Analysis — Robot MCU

# Binary analysis of STM32 / ESP32 firmware
# Assumption: ELF/BIN obtained via JTAG extraction or an OTA dump

# 1. Static analysis with Ghidra / radare2
$ r2 -A firmware.bin
[0x00000000]> afl         # list all functions
[0x00000000]> pdf @sym.uart_receive_cmd  # disassemble the command handler
[0x00000000]> axt @str.motor_speed      # cross-reference a string

# 2. Search for hardcoded credentials / keys
$ strings firmware.bin | grep -i -E "pass|key|secret|token|wifi|ssid"
$ binwalk -e firmware.bin   # extract embedded filesystems

# 3. UART debug port sniffing
# Probe the TX/RX pins with a logic analyzer / FTDI adapter
$ screen /dev/ttyUSB0 115200   # or picocom, minicom

# 4. JTAG/SWD access for memory dumps (OpenOCD + GDB)
$ openocd -f interface/stlink.cfg -f target/stm32f4x.cfg
$ arm-none-eabi-gdb firmware.elf
(gdb) target extended-remote :3333
(gdb) monitor reset halt
(gdb) dump binary memory dump.bin 0x08000000 0x08100000  # 1 MB flash dump
(gdb) dump binary memory sram.bin 0x20000000 0x20020000  # 128 KB SRAM dump
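To complement `strings` and `binwalk`, a sliding-window Shannon entropy scan over the dump helps locate encrypted blobs or key material (entropy near 8 bits/byte) amid code and padding. A self-contained sketch:

```python
import math
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Shannon entropy in bits per byte (0 = constant data, 8 = uniform random)."""
    if not block:
        return 0.0
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in Counter(block).values())

def entropy_scan(data: bytes, window=256, threshold=7.5):
    """Return (offset, entropy) pairs for windows above threshold.

    High-entropy regions often indicate keys, crypto material, or
    compressed/encrypted sections worth closer inspection.
    """
    hits = []
    for off in range(0, len(data) - window + 1, window):
        h = shannon_entropy(data[off:off + window])
        if h >= threshold:
            hits.append((off, round(h, 2)))
    return hits

# Usage sketch: hits = entropy_scan(open('dump.bin', 'rb').read())
```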

CAN Bus Attacks — Industrial & Automotive Robots

# CAN bus attacks with SocketCAN (Linux)

# Set up the CAN interface (hardware: USB CAN analyzer)
$ sudo ip link set can0 type can bitrate 1000000
$ sudo ip link set up can0

# Traffic monitoring
$ candump can0              # dump all frames
$ cansniffer can0           # real-time monitoring with diff highlighting

# CAN frame injection (requires pentest authorization)
$ cansend can0 123#DEADBEEF01020304   # ID=0x123, 8 data bytes

# Python CAN injection
import can

bus = can.Bus(interface='socketcan', channel='can0')

# Sniff 100 frames
for i in range(100):
    msg = bus.recv(timeout=1.0)
    if msg:
        print(f"ID: 0x{msg.arbitration_id:03X} Data: {msg.data.hex()}")

# Fuzz a specific frame ID
import random
for _ in range(1000):
    fuzz_data = bytes([random.randint(0, 255) for _ in range(8)])
    msg = can.Message(arbitration_id=0x200, data=fuzz_data)
    bus.send(msg)

Adversarial Attacks on Robot Perception

# FGSM (Fast Gradient Sign Method) — adversarial example
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """
    Generate adversarial image yang visually mirip asli
    tapi menyebabkan misclassification.
    Misal: robot stop sign → speed limit sign detection confusion.
    """
    image.requires_grad = True
    output = model(image)
    loss = F.cross_entropy(output, true_label)
    model.zero_grad()
    loss.backward()
    
    # Perturb in the direction of the gradient sign
    sign_grad = image.grad.data.sign()
    adv_image = image + epsilon * sign_grad
    adv_image = torch.clamp(adv_image, 0, 1)  # valid pixel range
    
    return adv_image

# Physical adversarial attacks: patches on real objects.
# An adversarial sticker, printed and attached to an object,
# can make a robot vision model misclassify it,
# a serious hazard for safety-critical systems.

Robot Security Hardening Checklist

Network Segmentation
Isolate the robot network from the IT network. Separate VLANs. Firewall rules: whitelist only the ports that are needed. Disable ROS2 multicast outside the subnet.
Enable SROS2 (Secure ROS2)
Turn on DDS security: RTPS encryption, X.509 authentication, per-topic access control. Enforce a policy for every node.
Secure Boot + Code Signing
Ensure firmware is signed. STM32: enable RDP level 2. Verify OTA update signatures before flashing.
Physical Security
Epoxy over the JTAG pins after production. Disable UART debug in release firmware. Tamper-evident hardware.
Anomaly Detection
Monitor behavioral deviations: abnormal joint velocities, commands outside the workspace, new CAN frame IDs. Trigger an automatic E-stop.
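The anomaly-detection item can be prototyped with a simple baseline model: learn the set of CAN arbitration IDs and their per-ID message rates from clean traffic, then alert on unknown IDs or rate spikes. A minimal sketch (the class and thresholds are illustrative, not a specific IDS product):

```python
from collections import defaultdict

class CanAnomalyDetector:
    def __init__(self, rate_factor=3.0):
        self.known_ids = set()
        self.baseline_rate = {}          # frames/sec per ID from clean traffic
        self.rate_factor = rate_factor   # alarm if rate > baseline * factor

    def train(self, frames, duration_sec):
        """frames: iterable of (timestamp, arbitration_id) from a clean capture."""
        counts = defaultdict(int)
        for _, can_id in frames:
            counts[can_id] += 1
        self.known_ids = set(counts)
        self.baseline_rate = {i: c / duration_sec for i, c in counts.items()}

    def check_window(self, frames, duration_sec):
        """Return a list of alerts for one observation window."""
        alerts = []
        counts = defaultdict(int)
        for _, can_id in frames:
            if can_id not in self.known_ids:
                alerts.append(('UNKNOWN_ID', can_id))
            counts[can_id] += 1
        for can_id, c in counts.items():
            base = self.baseline_rate.get(can_id)
            if base and c / duration_sec > base * self.rate_factor:
                alerts.append(('RATE_ANOMALY', can_id))
        return alerts   # a real system would latch these into an E-stop
```

Feeding it `python-can` messages means mapping each `msg` to `(msg.timestamp, msg.arbitration_id)`.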
09
Production

MANUFACTURING
A ROBOT PRODUCT

Stage 1 — Design for Manufacturing (DFM)

Design for Manufacturing Principles

Minimize Part Count
Every extra part adds assembly cost, a failure point, and inventory cost. Consolidate: machine the bracket and the mount as a single part.
Standardize Fasteners
Use only M3, M4, and M5. One head type (hex socket). Minimizes tooling and assembly mistakes.
Realistic Tolerances
Do not over-specify tolerances. Machined parts: ±0.1 mm standard, ±0.05 mm precision. Every extra zero multiplies the cost.

Bill of Materials (BOM) Management

# Simple Python BOM generator
import pandas as pd

class RobotBOM:
    def __init__(self, product_name):
        self.product = product_name
        self.items = []
    
    def add_item(self, pn, name, category, qty, unit_cost_usd,
                 supplier, lead_time_days, notes=""):
        self.items.append({
            'PN': pn, 'Name': name, 'Category': category,
            'Qty': qty, 'Unit Cost': unit_cost_usd,
            'Total Cost': qty * unit_cost_usd,
            'Supplier': supplier, 'Lead Time': lead_time_days,
            'Notes': notes
        })
    
    def generate_report(self):
        df = pd.DataFrame(self.items)
        total = df['Total Cost'].sum()
        
        print(f"\n{'='*60}")
        print(f"BOM: {self.product}")
        print(f"Total Parts: {len(self.items)}")
        print(f"Total COGS: ${total:.2f}")
        print(df.groupby('Category')['Total Cost'].sum().sort_values(ascending=False))
        return df, total

# Example BOM for a 6-DOF robot arm
bom = RobotBOM("BlueArm-6 v1.0")
bom.add_item("MOT-001", "Dynamixel XH540-W270",   "Actuator",  6,  299,  "Robotis",     14)
bom.add_item("MCU-001", "STM32F446RE Nucleo",        "Electronics", 1, 22,   "ST/Mouser",   7)
bom.add_item("CPT-001", "Jetson Orin NX 16GB",      "Compute",    1,  499,  "NVIDIA/Arrow", 21)
bom.add_item("SEN-001", "Intel RealSense D435i",    "Sensor",     1,  179,  "Intel",       7)
bom.add_item("STR-001", "Al 6061 machined links",   "Structure",  6,  45,   "Local CNC",   10)
bom.add_item("PCB-001", "Main controller PCB",      "PCB",        1,  85,   "JLCPCB",     14)

df, total_cogs = bom.generate_report()
# Pricing: target price = COGS × 3–5

PCB Design and Manufacturing

Layer | Function | Material
Top copper | Component placement, signal routing | 35 μm Cu foil
GND plane | Ground return, EMI shielding | 35 μm Cu full pour
Power plane | Power distribution (+5 V, +3.3 V) | 35 μm Cu full pour
Bottom copper | Signal routing, test points | 35 μm Cu foil
Substrate | Dielectric insulation | FR4 1.6 mm, Tg 170

PCB Design Rules for a Robot Controller

📐 Design Rules
Trace width: signal = 0.2 mm min, 3 A power = 1.5 mm, 10 A power = 4 mm.
Clearance: 0.2 mm min for signals, 0.5 mm for high voltage.
Via: 0.8 mm drill, 1.2 mm pad for a standard via.
Impedance: 50 Ω single-ended; 90 Ω (USB) / 100 Ω (Ethernet) for high-speed differential pairs.
Decoupling: 100 nF ceramic per IC power pin, 10 μF bulk per section.
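The current-carrying trace widths above can be estimated from the IPC-2221 relation I = k · ΔT^0.44 · A^0.725, with cross-section A in mil² and k ≈ 0.048 for external layers (0.024 internal). Inverting it gives a minimum width; a sketch:

```python
def trace_width_mm(current_a, temp_rise_c=10.0, copper_oz=1.0, external=True):
    """Minimum PCB trace width (mm) per the IPC-2221 current-capacity estimate.

    I = k * dT^0.44 * A^0.725, cross-section A in mil^2.
    """
    k = 0.048 if external else 0.024
    area_mil2 = (current_a / (k * temp_rise_c ** 0.44)) ** (1 / 0.725)
    thickness_mil = copper_oz * 1.378      # 1 oz/ft^2 copper ~ 1.378 mil ~ 35 um
    return area_mil2 / thickness_mil * 0.0254   # mil -> mm

w = trace_width_mm(3.0)   # 3 A, external 1 oz layer, 10 degC rise
```

For 3 A this gives roughly 1.4 mm, consistent with the 1.5 mm rule of thumb above; internal layers need noticeably wider traces for the same current.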

PCB Fab Service Comparison

Fab | Min Order | Lead Time | Quality | Best For
JLCPCB | 5 pcs, ~$2 | 3–7 days | Good | Prototypes, small runs
PCBWay | 5 pcs, ~$5 | 5–10 days | Very good | Prototypes + small production
JLCPCB + PCBA (LCSC parts) | Turnkey assembly | 10–15 days | Good | SMT assembly in one order
Seeed Fusion | 5 pcs, ~$4.9 | 7–14 days | Good | IoT/robot makers
TTM / Sanmina | High MOQ | 2–4 weeks | Excellent | Industrial mass production

Testing and Quality Assurance

#!/usr/bin/env python3
"""
Automated robot bring-up test sequence.
Jalankan setiap unit sebelum pengiriman ke customer.
"""
import time, logging
from dataclasses import dataclass
from typing import List, Dict

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s [%(levelname)s] %(message)s')

@dataclass
class TestResult:
    test_name: str
    passed: bool
    value: float
    unit: str
    limit_min: float
    limit_max: float
    message: str = ""

class RobotQATest:
    def __init__(self, robot_interface, serial_number):
        self.robot = robot_interface
        self.sn = serial_number
        self.results: List[TestResult] = []
    
    def run_all(self):
        tests = [
            self.test_power_supply,
            self.test_motor_communication,
            self.test_imu_calibration,
            self.test_joint_range_of_motion,
            self.test_encoder_accuracy,
            self.test_emergency_stop,
            self.test_payload_capacity,
            self.test_repeatability,
        ]
        
        for test in tests:
            try:
                result = test()
                self.results.append(result)
                status = "✓ PASS" if result.passed else "✗ FAIL"
                logging.info(f"{status} | {result.test_name}: "
                            f"{result.value:.3f} {result.unit}")
            except Exception as e:
                logging.error(f"TEST ERROR {test.__name__}: {e}")
        
        return self.generate_certificate()  # summary report (implementation not shown)
    
    def test_repeatability(self):
        """ISO 9283: positional repeatability test"""
        target = [0.5, 0.0, 0.4]
        positions = []
        
        for _ in range(30):  # 30 repetitions per the ISO standard
            self.robot.move_home()
            time.sleep(0.5)
            self.robot.move_to(target)
            time.sleep(0.5)
            actual = self.robot.get_ee_position()
            positions.append(actual)
        
        import numpy as np
        pos_arr = np.array(positions)
        mean_pos = pos_arr.mean(axis=0)
        errors = np.linalg.norm(pos_arr - mean_pos, axis=1)
        rp = (errors.mean() + 3 * errors.std()) * 1000  # ISO 9283: RP = l_bar + 3*sigma, in mm
        
        return TestResult(
            "Positional Repeatability (ISO 9283)",
            rp <= 0.1, rp, "mm", 0, 0.1,
            f"RP = {rp:.3f} mm (limit: ±0.1mm)"
        )

Certification and Compliance

Certification | Region | Scope | Mandatory
CE Marking | EU/EEA | EMC, safety (Machinery Directive 2006/42/EC) | Yes, to sell in the EU
FCC Part 15 | USA | RF emissions, interference | Yes, for wireless products in the US
SRRC / CCC | China | RF approval, safety | Yes, for the Chinese market
POSTEL / SDPPI | Indonesia | Telecommunications equipment | Yes, to import/sell in Indonesia
ISO 10218 | International | Industrial robot safety | Best practice
ISO/TS 15066 | International | Collaborative robots (cobots) | Best practice
IEC 61508 | International | Functional safety (SIL) | Safety-critical applications
RoHS / REACH | EU | Hazardous substances | Yes, for the EU market

Supply Chain Strategy

🏭
Prototype Stage
Mouser/DigiKey for one-off components. JLCPCB for PCBs. A local machine shop for mechanical parts. 3D printing for housings.
📦
Pilot Production (10–100)
LCSC for low-cost electronic components. PCBWay + PCBA for turnkey builds. Negotiate bulk pricing with motor suppliers.
🏗️
Mass Production (1000+)
ODM/OEM in China (Shenzhen EMS). Vertically integrate the critical supply chain. Buffer stock for long lead-time components (chips, motors).
🔄
Dual Sourcing
Always keep two suppliers per critical component. Avoid single-source dependencies. A lesson from the 2021 chip shortage.

Pre-Launch Checklist for a Robot Product

Area | Item | Status
Hardware | Design freeze, final gerber files | ☐
Hardware | DFM review with the fabricator | ☐
Hardware | EVT/DVT/PVT builds completed | ☐
Software | Firmware release signed + versioned | ☐
Software | OTA update mechanism tested | ☐
Safety | E-stop tested in hardware and software | ☐
Safety | Workspace limits in software + hardware | ☐
QA | 100% automated test on every unit | ☐
Compliance | CE / FCC / local certification | ☐
Docs | User manual, API docs, safety manual | ☐
Support | Spare-part availability for 5 years | ☐
Antonius - bluedragonsec.com
Blue Dragon Security Research