COMPREHENSIVE
ROBOTICS
A complete reference covering deep learning, LLMs, SLAM, inverse kinematics, electronics, Python programming, mechanical engineering, offensive security, and robot manufacturing. Designed as a dense, technical quick-recall reference.
ROBOTICS ARCHITECTURE
A modern robot is a cyber-physical system combining three interacting domains: perception, processing, and actuation.
Robot Taxonomy
| Category | Examples | DoF | Environment |
|---|---|---|---|
| Serial Manipulator | UR5, KUKA KR6, Franka Emika | 6–7 | Structured / industrial |
| Wheeled Mobile Robot | TurtleBot, ROS Car, AGV | 2–3 | Flat floors |
| Quadruped | Boston Dynamics Spot, ANYmal | 12+ | Uneven terrain |
| Humanoid | Atlas, Unitree H1, Figure 01 | 28–44 | Human environments |
| UAV | DJI, PX4-based, Crazyflie | 4–6 | Aerial |
| Surgical | Da Vinci, MAKO | 7+ | Sterile medical |
The Robot Control Loop
```python
# Basic Sense → Plan → Act (SPA) loop
while True:
    # 1. SENSE: read sensors
    state = get_sensor_data()             # IMU, camera, encoders
    # 2. ESTIMATE: estimate the world state
    world_model = slam_update(state)      # pose, map
    # 3. PLAN: plan the next action
    goal = high_level_planner(world_model)
    path = motion_planner(goal, world_model)
    # 4. ACT: send commands to actuators
    joint_angles = inverse_kinematics(path.next_pose())
    actuate(joint_angles)
    # 5. EVALUATE: learning / reward
    reward = compute_reward(state, goal)
    update_policy(reward)
```
DEEP LEARNING
FOR ROBOTICS
Deep learning lets robots learn complex patterns directly from raw sensor data — from camera pixels to LiDAR returns — without manual feature engineering.
Computer Vision for Robotics
CNN Architectures for Robot Perception
| Architecture | Use | Strengths | Tradeoff |
|---|---|---|---|
| ResNet-50/101 | Feature extraction, backbone | ImageNet pretrained, stable | Heavy, 25M params |
| MobileNetV3 | Edge deployment (Jetson Nano) | Lightweight, fast inference | Lower accuracy |
| EfficientDet | Object detection | Scalable, efficient | Complex training |
| YOLOv8/v11 | Real-time detection | Very fast, ms-level inference | Weaker on small objects |
| Vision Transformer | Segmentation, depth | Global context | Data hungry |
| SAM (Segment Anything) | Universal segmentation | Zero-shot segmentation | Heavy, 632M params |
Object Detection Implementation with YOLOv8
```python
from ultralytics import YOLO
import cv2
import numpy as np
import torch

# Load model — use yolov8n (nano) for edge devices
model = YOLO('yolov8n.pt')

def detect_objects_for_robot(frame, conf_thresh=0.5):
    """
    Detect objects for robot grasp planning.
    Returns: list of dicts with bbox, class, confidence, depth estimate.
    """
    results = model(frame, conf=conf_thresh, verbose=False)
    detections = []
    for r in results:
        for box in r.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            cls_name = r.names[int(box.cls)]
            conf = float(box.conf)
            # Estimate 3D position from the bounding box (no depth camera)
            cx = (x1 + x2) / 2
            cy = (y1 + y2) / 2
            area = (x2 - x1) * (y2 - y1)
            # Heuristic depth: smaller area → farther away
            approx_depth = 1000 / (area ** 0.5 + 1e-5)
            detections.append({
                'bbox': (x1, y1, x2, y2),
                'class': cls_name,
                'confidence': conf,
                'center': (cx, cy),
                'depth_estimate': approx_depth
            })
    return detections

# Monocular depth estimation with MiDaS
midas = torch.hub.load('intel-isl/MiDaS', 'MiDaS_small')
midas.eval()
transforms = torch.hub.load('intel-isl/MiDaS', 'transforms')
transform = transforms.small_transform

def estimate_depth(img_bgr):
    img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
    input_batch = transform(img_rgb).to('cpu')
    with torch.no_grad():
        prediction = midas(input_batch)
        prediction = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),
            size=img_rgb.shape[:2],
            mode='bicubic',
            align_corners=False
        ).squeeze()
    return prediction.numpy()
```
Reinforcement Learning for Robot Control
RL lets a robot learn an optimal policy through trial and error driven by a reward signal. Three main paradigms dominate: model-free RL, model-based RL, and imitation learning.
PPO (Proximal Policy Optimization) — the Most Popular Algorithm
```python
# Minimal PPO components for a robot arm (PyTorch)
import torch
import torch.nn as nn
import numpy as np

class ActorCritic(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        # Shared feature extractor
        self.shared = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.Tanh(),
            nn.Linear(256, 256), nn.Tanh(),
        )
        # Actor: mean and log_std for a continuous action
        self.actor_mean = nn.Linear(256, act_dim)
        self.actor_log_std = nn.Parameter(torch.zeros(act_dim))
        # Critic: state value V(s)
        self.critic = nn.Linear(256, 1)

    def forward(self, obs):
        feat = self.shared(obs)
        mean = self.actor_mean(feat)
        std = torch.exp(self.actor_log_std)
        dist = torch.distributions.Normal(mean, std)
        value = self.critic(feat)
        return dist, value

    def get_action(self, obs):
        dist, _ = self.forward(obs)
        action = dist.sample()
        log_prob = dist.log_prob(action).sum(-1)
        return action, log_prob

def compute_gae(rewards, values, dones, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation (values has len(rewards)+1 entries)."""
    advantages = np.zeros_like(rewards)
    last_gae = 0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t+1] * (1 - dones[t]) - values[t]
        last_gae = delta + gamma * lam * (1 - dones[t]) * last_gae
        advantages[t] = last_gae
    return advantages
```
Sim-to-Real Transfer
The central problem for RL on physical robots is the sim-to-real gap: policies trained in simulation degrade on real hardware. Common mitigations are domain randomization, system identification, and injecting sensor/actuator noise during training.
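Domain randomization, the most widely used of these mitigations, can be sketched as sampling fresh physics parameters every episode so the learned policy must be robust across a spread that covers the real robot. The parameter names and ranges below are our own illustration, not a specific simulator's API:

```python
import numpy as np

def randomize_dynamics(rng):
    """Sample per-episode physics perturbations (illustrative ranges)."""
    return {
        "mass_scale":     rng.uniform(0.8, 1.2),   # ±20% link mass
        "friction":       rng.uniform(0.5, 1.5),   # contact friction scale
        "motor_strength": rng.uniform(0.85, 1.15), # actuator gain scale
        "sensor_noise":   rng.uniform(0.0, 0.02),  # std of added noise (rad)
        "action_delay":   rng.integers(0, 3),      # control latency in steps
    }

# At the start of each training episode, apply a fresh sample to the simulator
rng = np.random.default_rng(0)
params = randomize_dynamics(rng)
```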
LLMs FOR
EMBODIED AI
Large Language Models are changing how robots understand natural-language commands, plan complex tasks, and interact with people. This is the foundation of Embodied AI — robots that "understand" the world semantically.
Foundation Models for Robotics
| Model | Organization | Capability | Input |
|---|---|---|---|
| RT-2 | Google DeepMind | Vision-language-action, zero-shot generalization | Image + Text → Action |
| PaLM-E | Google | Multimodal reasoning, grounding visual input | Image + Text → Plan |
| SayCan | Google Brain | LLM + affordance model, task decomposition | Text → API calls |
| ChatGPT/GPT-4o | OpenAI | Task planning, code generation, NLU | Text + Image |
| OpenVLA | Stanford | Open-source VLA, fine-tunable | Image + Text → Action |
| π₀ (Pi Zero) | Physical Intelligence | Generalist robot policy | Multi-modal |
Architecture: the LLM as Task Planner
```python
"""
Pattern: LLM → High-Level Plan → Motion Primitive Executor

The LLM does not control joints directly; it decomposes a task into
primitive functions that are already implemented.
"""
import openai
import json

# Define the robot API primitives as tools for the LLM
ROBOT_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "move_to",
            "description": "Move robot end-effector to target position",
            "parameters": {
                "type": "object",
                "properties": {
                    "x": {"type": "number", "description": "X coordinate in meters"},
                    "y": {"type": "number"},
                    "z": {"type": "number"},
                    "speed": {"type": "number", "description": "Speed 0.0-1.0"}
                },
                "required": ["x", "y", "z"]
            }
        }
    },
    {"type": "function", "function": {"name": "open_gripper",
        "description": "Open gripper", "parameters": {}}},
    {"type": "function", "function": {"name": "close_gripper",
        "description": "Close gripper to grasp", "parameters": {}}},
    {"type": "function", "function": {"name": "detect_object",
        "description": "Find object position by name",
        "parameters": {"type": "object",
                       "properties": {"object_name": {"type": "string"}}}}}
]

class LLMRobotPlanner:
    def __init__(self, robot_executor):
        self.client = openai.OpenAI()
        self.robot = robot_executor
        self.system_prompt = """
You are a robot arm controller. When given a task in natural language,
decompose it into primitive robot actions and execute them step-by-step.
Always detect object position before attempting to grasp.
Current robot workspace: 0.3m to 0.8m in front, ±0.4m left/right.
"""

    def execute_task(self, task_description):
        messages = [
            {"role": "system", "content": self.system_prompt},
            {"role": "user", "content": task_description}
        ]
        while True:
            response = self.client.chat.completions.create(
                model="gpt-4o",
                messages=messages,
                tools=ROBOT_TOOLS,
                tool_choice="auto"
            )
            msg = response.choices[0].message
            if not msg.tool_calls:
                print(f"Task complete: {msg.content}")
                break
            # Keep the assistant's tool-call message in the history so the
            # subsequent tool results are valid on the next API call
            messages.append(msg)
            # Execute each tool call on the physical robot
            for tc in msg.tool_calls:
                result = self.dispatch(tc.function.name,
                                       json.loads(tc.function.arguments))
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": json.dumps(result)
                })
```
SLAM
SLAM (Simultaneous Localization and Mapping) solves a chicken-and-egg problem: a robot needs a map to navigate, but needs to know its position to build the map. SLAM estimates both at once.
Probabilistic Formulation of SLAM
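Stated formally (a standard formulation: x₁:ₜ is the robot trajectory, m the map, z₁:ₜ the observations, u₁:ₜ the controls), full SLAM estimates the joint posterior, which factors under the Markov assumption:

```latex
p(x_{1:t}, m \mid z_{1:t}, u_{1:t})
\;\propto\;
p(x_0)\prod_{k=1}^{t}
\underbrace{p(x_k \mid x_{k-1}, u_k)}_{\text{motion model}}\;
\underbrace{p(z_k \mid x_k, m)}_{\text{observation model}}
```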
EKF-SLAM (Extended Kalman Filter)
Assumption: linear-Gaussian motion and observation models. The state vector stacks the robot pose with the positions of all landmarks.
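The stacked-state idea can be shown with a deliberately linear 1D toy of our own construction (not a full EKF-SLAM implementation): the state holds one robot position and one landmark, and a single predict/update step correlates them.

```python
import numpy as np

# State: [robot position p, landmark position l]; the robot moves by u,
# the landmark is static, and the measurement is z = l - p.
F = np.eye(2)                     # state transition (landmark static)
B = np.array([[1.0], [0.0]])      # control affects the robot only
H = np.array([[-1.0, 1.0]])       # measurement model z = l - p
Q = np.diag([0.01, 0.0])          # motion noise (robot only)
R = np.array([[0.04]])            # measurement noise

def kf_slam_step(x, P, u, z):
    # Predict
    x = F @ x + B @ np.array([u])
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.array([0.0, 5.0])   # initial guess: robot at 0, landmark near 5
P = np.diag([0.1, 10.0])   # landmark is highly uncertain
x, P = kf_slam_step(x, P, u=1.0, z=np.array([3.8]))
# The correction lands mostly on the uncertain landmark, barely on the robot
```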
Graph-based SLAM (the Modern Backend)
SLAM is formulated as a graph optimization problem: nodes are robot poses, edges are constraints from odometry or loop closures.
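The optimization can be sketched in one dimension: odometry edges and a loop-closure edge form an over-determined linear system, solved in the least-squares sense (a toy with made-up measurements; real backends like Ceres or GTSAM solve the nonlinear, high-dimensional version):

```python
import numpy as np

# Toy 1D pose graph: three poses with x0 fixed at 0. Constraints:
#   odometry  x1 - x0 = 1.0
#   odometry  x2 - x1 = 1.0
#   loop closure x2 - x0 = 1.9   (disagrees slightly with odometry)
# Each row of A encodes one constraint over the free variables [x1, x2].
A = np.array([
    [ 1.0, 0.0],
    [-1.0, 1.0],
    [ 0.0, 1.0],
])
b = np.array([1.0, 1.0, 1.9])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # optimized [x1, x2]; the loop closure pulls x2 below the naive 2.0
```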
Visual SLAM — ORB-SLAM3
```python
# Simplified ORB-feature visual-odometry front-end (ROS2), in the spirit of
# ORB-SLAM-style pipelines (no mapping, relocalization, or loop closure)
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, CameraInfo
from geometry_msgs.msg import PoseStamped
import cv2
import numpy as np
from cv_bridge import CvBridge

class SLAMNode(Node):
    def __init__(self):
        super().__init__('slam_node')
        self.bridge = CvBridge()
        # Subscribe to the camera
        self.create_subscription(Image, '/camera/image_raw',
                                 self.image_callback, 10)
        self.create_subscription(CameraInfo, '/camera/camera_info',
                                 self.info_callback, 1)
        # Publish the estimated pose
        self.pose_pub = self.create_publisher(PoseStamped, '/slam/pose', 10)
        # ORB feature detector + brute-force matcher
        self.orb = cv2.ORB_create(nfeatures=1000)
        self.matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        self.prev_kp = None
        self.prev_desc = None
        self.camera_matrix = None

    def info_callback(self, msg):
        # Store intrinsics as a 3x3 matrix once they arrive
        self.camera_matrix = np.array(msg.k).reshape(3, 3)

    def image_callback(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, desc = self.orb.detectAndCompute(gray, None)
        if (self.prev_desc is not None and desc is not None
                and self.camera_matrix is not None):
            matches = self.matcher.match(self.prev_desc, desc)
            matches = sorted(matches, key=lambda x: x.distance)[:50]
            if len(matches) >= 8:
                pts1 = np.float32([self.prev_kp[m.queryIdx].pt for m in matches])
                pts2 = np.float32([kp[m.trainIdx].pt for m in matches])
                # Essential matrix → recover relative pose R, t
                E, mask = cv2.findEssentialMat(pts1, pts2, self.camera_matrix,
                                               cv2.RANSAC, 0.999, 1.0)
                _, R, t, _ = cv2.recoverPose(E, pts1, pts2, self.camera_matrix)
                self.publish_pose(R, t, msg.header.stamp)
        self.prev_kp = kp
        self.prev_desc = desc
```
LiDAR SLAM — Cartographer & LOAM
| Algorithm | Backend | Sensor | Strengths |
|---|---|---|---|
| Cartographer | Graph SLAM (Ceres) | 2D/3D LiDAR | Google, production-grade, ROS2 support |
| LOAM | Iterative closest point | 3D LiDAR (Velodyne) | Real-time, edge+planar feature matching |
| LIO-SAM | Factor graph (GTSAM) | LiDAR + IMU | Tightly coupled, robust loop closure |
| KISS-ICP | ICP + adaptive threshold | Any LiDAR | Simple, robust, ICRA 2023 |
| RTAB-Map | Bag-of-words + graph | RGB-D / LiDAR | Long-term mapping, online loop closure |
INVERSE
KINEMATICS
Inverse Kinematics (IK) answers the question: "What joint angles place the end-effector at a target pose?" — the inverse of Forward Kinematics, which computes the pose from the joint angles.
Forward Kinematics — the Foundation of IK
Denavit-Hartenberg (DH) Parameters
The DH convention defines four parameters per joint — link length a, link twist α, link offset d, and joint angle θ — describing the transform between consecutive links.
The DH Transformation Matrix
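With the four parameters above, the transform from frame i−1 to frame i is the product Rot_z(θᵢ)·Trans_z(dᵢ)·Trans_x(aᵢ)·Rot_x(αᵢ), which multiplies out to:

```latex
A_i^{\,i-1} =
\begin{bmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix}
```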
Analytical IK Methods (Closed-form)
For 6-DOF robots satisfying Pieper's condition (three consecutive joint axes meeting at a point), closed-form solutions exist; the UR5 is a common example with a known analytical solution.
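The flavor of a closed-form solution is easiest to see on a planar 2-link arm, where the cosine rule gives the elbow angle directly (an illustrative sketch, not the UR5 solution itself):

```python
import numpy as np

def ik_2link(x, y, L1=0.3, L2=0.25, elbow_up=True):
    """Closed-form IK for a planar 2-link arm with link lengths L1, L2."""
    r2 = x**2 + y**2
    # Cosine rule gives the elbow angle q2 directly
    c2 = (r2 - L1**2 - L2**2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    s2 = np.sqrt(1 - c2**2) * (1 if elbow_up else -1)  # two branches
    q2 = np.arctan2(s2, c2)
    # Shoulder angle from the target direction minus the wrist offset
    q1 = np.arctan2(y, x) - np.arctan2(L2 * s2, L1 + L2 * c2)
    return q1, q2

# Verify against forward kinematics
q1, q2 = ik_2link(0.4, 0.2)
fx = 0.3 * np.cos(q1) + 0.25 * np.cos(q1 + q2)
fy = 0.3 * np.sin(q1) + 0.25 * np.sin(q1 + q2)
```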
Numerical Methods — the Jacobian Inverse
For arbitrary or redundant kinematic structures, use an iterative Jacobian-based approach.
Python IK Implementation — a 6-DOF Robot
```python
import numpy as np

class RobotArm6DOF:
    """
    6-DOF robot arm IK solver.
    DH parameters: [a, alpha, d, theta_offset] per joint.
    """
    def __init__(self, dh_params):
        self.dh = np.array(dh_params)

    def dh_matrix(self, a, alpha, d, theta):
        """4x4 DH transformation matrix."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([
            [ct, -st*ca,  st*sa, a*ct],
            [st,  ct*ca, -ct*sa, a*st],
            [ 0,     sa,     ca,    d],
            [ 0,      0,      0,    1]
        ])

    def forward_kinematics(self, q):
        """FK: end-effector pose from joint angles q."""
        T = np.eye(4)
        for i, (a, alpha, d, theta_off) in enumerate(self.dh):
            T = T @ self.dh_matrix(a, alpha, d, q[i] + theta_off)
        return T  # 4x4 homogeneous transform

    def jacobian(self, q, eps=1e-6):
        """Numerical Jacobian via finite differences."""
        T0 = self.forward_kinematics(q)
        p0 = T0[:3, 3]
        n = len(q)
        J = np.zeros((6, n))
        for i in range(n):
            q_eps = q.copy()
            q_eps[i] += eps
            Ti = self.forward_kinematics(q_eps)
            # Linear velocity part
            J[:3, i] = (Ti[:3, 3] - p0) / eps
            # Angular velocity part (skew-symmetric part of the rotation change)
            dR = (Ti[:3, :3] - T0[:3, :3]) / eps
            J[3, i] = dR[2, 1]
            J[4, i] = dR[0, 2]
            J[5, i] = dR[1, 0]
        return J

    def ik_dls(self, target_pos, target_R=None, q_init=None,
               max_iter=200, tol=1e-4, lam=0.01):
        """
        Damped Least Squares IK solver.
        target_pos: [x, y, z] target position
        target_R: 3x3 target rotation (optional)
        Returns: q (joint angles), converged (bool)
        """
        q = q_init if q_init is not None else np.zeros(len(self.dh))
        for _ in range(max_iter):
            T = self.forward_kinematics(q)
            pos_err = target_pos - T[:3, 3]
            if target_R is not None:
                R_err = target_R @ T[:3, :3].T
                angle = np.arccos(np.clip((np.trace(R_err) - 1) / 2, -1, 1))
                if angle > 1e-8:
                    ax = np.array([R_err[2, 1] - R_err[1, 2],
                                   R_err[0, 2] - R_err[2, 0],
                                   R_err[1, 0] - R_err[0, 1]]) / (2 * np.sin(angle))
                    rot_err = angle * ax
                else:
                    rot_err = np.zeros(3)
                dx = np.concatenate([pos_err, rot_err])
            else:
                dx = pos_err
            if np.linalg.norm(pos_err) < tol:
                return q, True
            J = self.jacobian(q)
            if target_R is None:
                J = J[:3]  # position only
            # DLS: Δq = Jᵀ(JJᵀ + λ²I)⁻¹ Δx
            JJt = J @ J.T
            dq = J.T @ np.linalg.solve(JJt + lam**2 * np.eye(J.shape[0]), dx)
            q = q + dq
            # Joint-limit clipping (example: ±180°)
            q = np.clip(q, -np.pi, np.pi)
        return q, False

# Example: UR5 DH parameters [a, alpha, d, theta_offset]
ur5_dh = [
    [0,       np.pi/2, 0.089, 0],
    [-0.425,  0,       0,     0],
    [-0.392,  0,       0,     0],
    [0,       np.pi/2, 0.109, 0],
    [0,      -np.pi/2, 0.095, 0],
    [0,       0,       0.082, 0],
]
robot = RobotArm6DOF(ur5_dh)
target = np.array([0.5, 0.2, 0.3])
q_sol, ok = robot.ik_dls(target)
print(f"IK {'converged' if ok else 'failed'}: {np.degrees(q_sol).round(2)}°")
```
Modern Methods: FABRIK & Neural IK
| Method | Complexity | Singularities | Use Case |
|---|---|---|---|
| Analytical (closed-form) | O(1) | Handled manually | Standard 6-DOF robots, real-time control |
| Jacobian Transpose | O(n²) | Can get stuck | Simple, sufficient for many cases |
| Damped Least Squares | O(n³) | Robust via damping | General, production-grade |
| FABRIK | O(n) | N/A (geometric) | Animation, characters, long chains |
| Neural IK (learned) | O(1) inference | Learned implicitly | Non-standard kinematics, deformables |
| CCD (Cyclic Coord Descent) | O(n) | N/A | Redundant arms, obstacle avoidance |
ELECTRONICS
FOR ROBOTICS
Microcontrollers & Compute Platforms
| Platform | CPU/MCU | RAM | GPIO | Typical Use |
|---|---|---|---|---|
| Arduino Mega | ATmega2560, 16MHz | 8KB | 54 | Motor control, basic sensor fusion |
| ESP32-S3 | Xtensa LX7 dual-core, 240MHz | 512KB+8MB | 36 | WiFi/BT robots, edge AI (TFLite) |
| STM32F4 | Cortex-M4, 168MHz | 192KB | 50+ | Real-time motor control, FOC |
| Raspberry Pi 5 | Cortex-A76, 2.4GHz | 4–8GB | 40 | ROS2 host, vision processing |
| NVIDIA Jetson Nano | Cortex-A57 + 128-core GPU | 4GB | 40 | On-robot deep learning inference |
| NVIDIA Jetson Orin NX | 12-core + 1024 CUDA | 16GB | 40 | Real-time vision + SLAM + DNN |
| BeagleBone AI-64 | Cortex-A72 + DSP + MMA | 4GB | 80+ | Industrial robots, PRU real-time |
Actuators — Motors and Drivers
Motor Types
FOC (Field Oriented Control) — Modern BLDC Motor Control
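The core of FOC is the Clarke and Park transforms, which map three-phase currents into a rotor-aligned dq frame where torque control becomes a DC regulation problem. A minimal sketch follows; the function names are our own, and a real driver adds PI controllers, SVPWM, and encoder handling:

```python
import numpy as np

def clarke(ia, ib):
    """Clarke transform: 3-phase currents → stationary αβ frame (ia+ib+ic=0)."""
    i_alpha = ia
    i_beta = (ia + 2 * ib) / np.sqrt(3)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """Park transform: αβ → rotating dq frame aligned with rotor angle theta."""
    i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
    i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
    return i_d, i_q

def inverse_park(v_d, v_q, theta):
    """Rotate dq voltage commands back to the stationary αβ frame."""
    v_alpha = v_d * np.cos(theta) - v_q * np.sin(theta)
    v_beta = v_d * np.sin(theta) + v_q * np.cos(theta)
    return v_alpha, v_beta

def foc_step(ia, ib, theta, iq_ref, kp=0.5):
    """One control step: drive i_d → 0, i_q → the torque command iq_ref."""
    i_alpha, i_beta = clarke(ia, ib)
    i_d, i_q = park(i_alpha, i_beta, theta)
    v_d = kp * (0.0 - i_d)       # P controller (real drivers use PI)
    v_q = kp * (iq_ref - i_q)
    return inverse_park(v_d, v_q, theta)  # feed to SVPWM / inverter
```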
Robot Sensor Suite
| Sensor | Output | Interface | Use |
|---|---|---|---|
| IMU (MPU-6050, ICM-42688) | 6/9-axis accel+gyro+mag | I²C/SPI | Orientation, odometry, stabilization |
| Quadrature Encoder | A/B pulses, 1000–4096 PPR | GPIO interrupt | Motor position and velocity |
| 2D LiDAR (RPLiDAR A3) | 360° scan, 0.15–25m | UART/USB | 2D SLAM, obstacle avoidance |
| 3D LiDAR (Livox Mid-360) | Point cloud, 40m range | Ethernet | 3D SLAM, object detection |
| RGB-D (Intel RealSense) | Color + depth 1280×720 | USB3 | Visual SLAM, grasp planning |
| Force/Torque (ATI Nano17) | 6-axis F/T, ±70N | SPI/analog | Force control, contact detection |
| ToF Array (VL53L5CX) | 8×8 depth matrix | I²C | Proximity, gestures, obstacles |
Power Management for Robots
```cpp
/* Power distribution architecture for a mobile robot        */
/* Battery: 4S LiPo (14.8V nominal, 16.8V full, 12V cutoff), */
/* or 6S for larger robots                                   */
/*
 BATT (14.8V) ──┬──> FUSE (30A) ──> Motor driver (BLDC/DC motor)
                │
                ├──> 12V regulator ──> Servo bus
                │    (LM2596 buck, ≥5A)
                │
                ├──> 5V regulator ──> Raspberry Pi,
                │    (MP2315 buck, 3A)  GPIO sensors
                │
                └──> 3.3V regulator ──> STM32 / sensors
                     (AMS1117 or LDO)

 Battery monitor: INA226 (I2C) for current sensing
 Balancer: integrated BMS for overcharge/discharge protection
*/

// Arduino power monitoring (assumes an INA226 Arduino library)
#include <Wire.h>
#include <Adafruit_INA226.h>

Adafruit_INA226 ina226;

void setup() {
  Wire.begin();
  ina226.begin();
  ina226.setShunt(0.01, 20.0);  // 10 mΩ shunt, 20 A max
  ina226.setAveragingCount(INA226_COUNT_64);
}

void loop() {
  float voltage = ina226.readBusVoltage();    // volts
  float current = ina226.readShuntCurrent();  // amps
  float power   = ina226.readPower();         // watts
  if (voltage < 12.0) {
    emergency_shutdown();  // LiPo undervoltage protection (user-defined)
  }
}
```
Communication Buses for Robots
| Protocol | Speed | Topology | Robotics Use |
|---|---|---|---|
| CAN Bus / CAN FD | 1 Mbps / 8 Mbps | Multi-drop bus | Motor controllers, industrial (commercial robot arms) |
| EtherCAT | 100 Mbps | Ring/daisy chain | Real-time motion control, industrial servos |
| RS-485 (Modbus/Dynamixel) | 3 Mbps | Multi-drop | Dynamixel smart servos, industrial sensors |
| SPI | 40+ MHz | Point-to-point | IMUs, encoders, high-speed sensors |
| I²C | 400 kHz / 3.4 MHz | Multi-master/slave | I²C sensors (ToF, IMU, pressure) |
| UART/USART | 115200–3M bps | Point-to-point | GPS, LiDAR, debugging, ROS serial bridge |
MECHANICAL
ENGINEERING
Static Analysis of Robot Links
Beam Theory — Deflection and Stress
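As a worked example of the beam formulas, a robot link loaded at its tip behaves as a cantilever: tip deflection δ = FL³/3EI and peak bending stress σ = Mc/I. The numbers below are illustrative, not from a specific robot:

```python
def cantilever_analysis(F, L, E, b, h):
    """F in N, L in m, E in Pa, rectangular section b × h in m.
    Returns (tip deflection in m, max bending stress in Pa)."""
    I = b * h**3 / 12                 # second moment of area, rectangle
    deflection = F * L**3 / (3 * E * I)
    M = F * L                         # max moment at the fixed end
    sigma = M * (h / 2) / I           # bending stress at the outer fiber
    return deflection, sigma

# Example: 20 N tip load on a 0.4 m aluminium 6061 link, 30 × 10 mm section
d, s = cantilever_analysis(F=20, L=0.4, E=68.9e9, b=0.03, h=0.01)
print(f"deflection = {d*1000:.2f} mm, stress = {s/1e6:.1f} MPa (yield 276 MPa)")
```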
Materials for Robot Structures
| Material | Density (g/cm³) | Yield (MPa) | E (GPa) | Strengths |
|---|---|---|---|---|
| Aluminium 6061-T6 | 2.7 | 276 | 68.9 | Light, machinable, widely available |
| Aluminium 7075-T6 | 2.81 | 503 | 71.7 | Stronger, aerospace grade |
| Steel 4140 | 7.85 | 655 | 200 | Very strong, cheap, heavy |
| Titanium Grade 5 | 4.43 | 880 | 113.8 | Strong and light, biocompatible |
| Carbon Fiber (CFRP) | 1.6 | 600+ | 70–200 | Very light, anisotropic |
| PETG (3D print) | 1.27 | 50 | 2.1 | Fast prototyping, not brittle |
| Nylon PA12 | 1.01 | 48 | 1.7 | Impact-resistant, flexible |
Gear Trains and Speed Reduction
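The governing relations are simple: a reducer of ratio N divides output speed by N and multiplies output torque by N times the efficiency η. A quick sizing helper (illustrative values):

```python
def gear_output(torque_in_nm, speed_in_rpm, ratio, efficiency=0.9):
    """Return (output torque in N·m, output speed in rpm) for a reducer."""
    torque_out = torque_in_nm * ratio * efficiency
    speed_out = speed_in_rpm / ratio
    return torque_out, speed_out

# Example: 0.5 N·m BLDC at 3000 rpm through a 50:1 reducer at 80% efficiency
t, s = gear_output(0.5, 3000, 50, efficiency=0.8)
print(f"{t:.1f} N·m at {s:.0f} rpm")  # → 20.0 N·m at 60 rpm
```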
Robot Joint Design
3D Printing for Robot Prototypes
| Technology | Material | Tolerance | Robotics Use |
|---|---|---|---|
| FDM (Bambu Lab, Prusa) | PLA/PETG/ABS/Nylon/CF | ±0.2 mm | Housings, brackets, fast prototypes |
| SLA/MSLA (Elegoo) | Standard/ABS-like resin | ±0.05 mm | Fine parts, small gears, gripper fingertips |
| SLS (industrial) | Nylon PA12/PA11 | ±0.1 mm | Isotropic structural parts, no supports |
| MJF (HP) | Nylon PA12 | ±0.08 mm | Small-batch production, end-use parts |
| Metal SLM | Ti64, AlSi10Mg, 316L | ±0.05 mm | Joint housings, flanges, high-stress parts |
PYTHON
PROGRAMMING
ROS2 with Python — Foundations
```python
#!/usr/bin/env python3
"""ROS2 node template for a robot controller."""
import rclpy
from rclpy.node import Node
from rclpy.callback_groups import ReentrantCallbackGroup
from sensor_msgs.msg import JointState
from geometry_msgs.msg import PoseStamped
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint
from builtin_interfaces.msg import Duration
from moveit_msgs.srv import GetPositionIK
from moveit_msgs.msg import PositionIKRequest

class RobotController(Node):
    def __init__(self):
        super().__init__('robot_controller')
        # Callback group for concurrent callbacks
        self.cb_group = ReentrantCallbackGroup()
        # Publishers
        self.joint_cmd_pub = self.create_publisher(
            JointTrajectory,
            '/joint_trajectory_controller/joint_trajectory', 10
        )
        # Subscribers
        self.joint_state_sub = self.create_subscription(
            JointState, '/joint_states', self.joint_state_callback, 10,
            callback_group=self.cb_group
        )
        # Service client for IK (MoveIt2)
        self.ik_client = self.create_client(GetPositionIK, '/compute_ik')
        self.current_joints = None
        self.get_logger().info('Robot Controller initialized')

    def joint_state_callback(self, msg):
        self.current_joints = {
            name: pos for name, pos in zip(msg.name, msg.position)
        }

    async def move_to_pose(self, target_pose: PoseStamped):
        """Send a target pose, solve IK via MoveIt2, execute the trajectory."""
        req = GetPositionIK.Request()
        req.ik_request = PositionIKRequest()
        req.ik_request.group_name = 'arm'
        req.ik_request.pose_stamped = target_pose
        req.ik_request.timeout.sec = 2
        result = await self.ik_client.call_async(req)
        if result.error_code.val == 1:  # SUCCESS
            positions = result.solution.joint_state.position
            await self.execute_joint_trajectory(positions)
        else:
            self.get_logger().error('IK failed')

    async def execute_joint_trajectory(self, target_positions,
                                       duration_sec=2.0):
        traj = JointTrajectory()
        traj.joint_names = ['joint_1', 'joint_2', 'joint_3',
                            'joint_4', 'joint_5', 'joint_6']
        point = JointTrajectoryPoint()
        point.positions = list(target_positions)
        point.time_from_start = Duration(sec=int(duration_sec))
        traj.points = [point]
        self.joint_cmd_pub.publish(traj)
        self.get_logger().info(f'Executing trajectory to {target_positions}')

def main():
    rclpy.init()
    node = RobotController()
    executor = rclpy.executors.MultiThreadedExecutor()
    executor.add_node(node)
    executor.spin()

if __name__ == '__main__':
    main()
```
PyBullet Simulation — Physics-Based Robot Sim
```python
import pybullet as pb
import pybullet_data
import numpy as np
import time

class RobotSimulation:
    def __init__(self, urdf_path, gui=True):
        self.client = pb.connect(pb.GUI if gui else pb.DIRECT)
        pb.setAdditionalSearchPath(pybullet_data.getDataPath())
        pb.setGravity(0, 0, -9.81)
        # Load the ground plane and the robot
        self.plane = pb.loadURDF('plane.urdf')
        self.robot = pb.loadURDF(urdf_path, basePosition=[0, 0, 0],
                                 useFixedBase=True)
        # Identify revolute joint indices
        self.num_joints = pb.getNumJoints(self.robot)
        self.joint_ids = []
        for i in range(self.num_joints):
            info = pb.getJointInfo(self.robot, i)
            if info[2] == pb.JOINT_REVOLUTE:
                self.joint_ids.append(i)
        # Simulation timestep
        pb.setTimeStep(1/240)

    def set_joint_positions(self, q):
        for jid, angle in zip(self.joint_ids, q):
            pb.setJointMotorControl2(
                self.robot, jid,
                controlMode=pb.POSITION_CONTROL,
                targetPosition=angle,
                force=300,        # max force (N)
                maxVelocity=2.0   # rad/s
            )

    def ik_pybullet(self, ee_link_id, target_pos, target_orn=None):
        """Use PyBullet's built-in IK solver."""
        if target_orn is not None:
            q = pb.calculateInverseKinematics(
                self.robot, ee_link_id, target_pos, target_orn,
                maxNumIterations=100, residualThreshold=1e-5
            )
        else:
            q = pb.calculateInverseKinematics(
                self.robot, ee_link_id, target_pos, maxNumIterations=100
            )
        return np.array(q)

    def step(self, render=True):
        pb.stepSimulation()
        if render:
            time.sleep(1/240)

    def get_ee_pose(self, ee_link_id):
        state = pb.getLinkState(self.robot, ee_link_id)
        pos = np.array(state[4])  # world position
        orn = np.array(state[5])  # world orientation (quaternion)
        return pos, orn

# Example usage
sim = RobotSimulation('kuka_iiwa/model.urdf')
target = [0.5, 0.0, 0.6]
q_sol = sim.ik_pybullet(6, target)
sim.set_joint_positions(q_sol)
for _ in range(500):
    sim.step()
```
Key Libraries in the Python Robotics Ecosystem
| Library | Purpose | Install |
|---|---|---|
| rclpy | ROS2 Python client | apt install ros-humble-rclpy |
| numpy / scipy | Numerical computing, transforms | pip install numpy scipy |
| roboticstoolbox-python | DH, FK/IK, trajectories, simulator | pip install roboticstoolbox-python |
| spatialmath-python | SO3, SE3, quaternions | pip install spatialmath-python |
| OpenCV (cv2) | Computer vision, camera I/O | pip install opencv-python |
| PyBullet | Physics simulation | pip install pybullet |
| MuJoCo (official bindings) | High-fidelity physics sim | pip install mujoco |
| PyTorch / TF | Deep learning | pip install torch |
| stable-baselines3 | RL algorithms (PPO, SAC, TD3) | pip install stable-baselines3 |
| ikpy | Pure-Python IK library | pip install ikpy |
| open3d | Point cloud processing | pip install open3d |
| pyserial | Serial communication (Arduino) | pip install pyserial |
OFFENSIVE
SECURITY
Attack Surface of Modern Robots
ROS2 Security Assessment
ROS2 Threat Model
ROS2 uses DDS (Data Distribution Service) as its middleware. By default, DDS traffic is neither encrypted nor authenticated between nodes.
| Attack Vector | Description | Impact | Mitigation |
|---|---|---|---|
| Topic Spoofing | Malicious node publishes to /cmd_vel or /joint_commands | Full robot control | SROS2, DDS security plugins |
| Topic Eavesdropping | Subscribe to /camera/image_raw, /map, sensor data | Sensor data exfiltration | DDS encryption (TLS) |
| Node Injection | Run a malicious ROS2 node inside the network | MitM, replay attacks | Namespace access control |
| Parameter Server Attack | Modify parameters via /robot/set_parameters | Configuration tampering | Parameter whitelist, auth |
| Service Call Abuse | Call /emergency_stop, /reset, /load_controller | DoS, unsafe state | Service ACLs |
```shell
# ROS2 security assessment tools
# ==============================

# 1. Enumerate all active topics on the ROS2 network
$ ros2 topic list -v
$ ros2 topic echo /cmd_vel        # intercept velocity commands
$ ros2 topic echo /joint_states   # sniff joint positions

# 2. Enumerate services and parameters
$ ros2 service list
$ ros2 param list
$ ros2 param get /robot_driver motor_kp

# 3. Node graph for attack-surface mapping
$ ros2 node list
$ rqt_graph                       # visual pub/sub graph

# 4. Malicious publish to /cmd_vel (only with pentest authorization)
$ ros2 topic pub /cmd_vel geometry_msgs/msg/Twist \
    "{linear: {x: 0.5}, angular: {z: 0.3}}"

# 5. SROS2 — enabling security
$ ros2 security create_keystore ~/sros2_keystore
$ ros2 security create_key ~/sros2_keystore /talker
$ ros2 security create_permission ~/sros2_keystore /talker \
    ~/sros2_keystore/enclaves/talker/policy.xml
$ export ROS_SECURITY_KEYSTORE=~/sros2_keystore
$ export ROS_SECURITY_ENABLE=true
$ export ROS_SECURITY_STRATEGY=Enforce
```
Firmware Analysis — Robot MCU
```shell
# Binary analysis of STM32 / ESP32 firmware
# Assumes an ELF/BIN obtained via JTAG extraction or an OTA dump

# 1. Static analysis with Ghidra / radare2
$ r2 -A firmware.bin
[0x00000000]> afl                        # list all functions
[0x00000000]> pdf @sym.uart_receive_cmd  # disassemble command handler
[0x00000000]> axt @str.motor_speed       # cross-reference a string

# 2. Search for hardcoded credentials / keys
$ strings firmware.bin | grep -i -E "pass|key|secret|token|wifi|ssid"
$ binwalk -e firmware.bin                # extract embedded filesystems

# 3. UART debug port sniffing
# Probe TX/RX pins with a logic analyzer / FTDI
$ screen /dev/ttyUSB0 115200             # or picocom, minicom

# 4. JTAG/SWD access for memory dumps (OpenOCD)
$ openocd -f interface/stlink.cfg -f target/stm32f4x.cfg
(gdb) target extended-remote :3333
(gdb) monitor reset halt
(gdb) dump binary memory dump.bin 0x08000000 0x080FFFFF  # flash dump
(gdb) dump binary memory sram.bin 0x20000000 0x2001FFFF  # SRAM dump
```
CAN Bus Attacks — Industrial & Automotive Robots
```shell
# CAN bus attacks with SocketCAN (Linux)
# Set up the CAN interface (hardware: USB CAN analyzer)
$ ip link set can0 type can bitrate 1000000
$ ip link set up can0

# Monitor traffic
$ candump can0        # dump all frames
$ cansniffer can0     # real-time monitoring with diff highlighting

# Inject a CAN frame (requires pentest authorization)
$ cansend can0 123#DEADBEEF01020304   # ID=0x123, 8 data bytes
```

```python
# Python CAN injection
import can
import random

bus = can.Bus(interface='socketcan', channel='can0')

# Sniff 100 frames
for i in range(100):
    msg = bus.recv(timeout=1.0)
    if msg:
        print(f"ID: 0x{msg.arbitration_id:03X} Data: {msg.data.hex()}")

# Fuzz a specific frame ID
for _ in range(1000):
    fuzz_data = bytes([random.randint(0, 255) for _ in range(8)])
    msg = can.Message(arbitration_id=0x200, data=fuzz_data)
    bus.send(msg)
```
Adversarial Attacks on Robot Perception
```python
# FGSM (Fast Gradient Sign Method) — adversarial example generation
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """
    Generate an adversarial image that looks visually identical to the
    original but causes misclassification — e.g., a robot confusing a
    stop sign with a speed-limit sign.
    """
    image.requires_grad = True
    output = model(image)
    loss = F.cross_entropy(output, true_label)
    model.zero_grad()
    loss.backward()
    # Perturb in the direction of the gradient sign
    sign_grad = image.grad.data.sign()
    adv_image = image + epsilon * sign_grad
    adv_image = torch.clamp(adv_image, 0, 1)  # keep a valid pixel range
    return adv_image

# Physical adversarial attacks: a printed adversarial patch stuck onto a
# real object can make a robot vision model misclassify it —
# dangerous for safety-critical systems.
```
Robot Security Hardening Checklist
MANUFACTURING
A ROBOT PRODUCT
Stage 1 — Design for Manufacturing (DFM)
Design for Manufacturing Principles
Bill of Materials (BOM) Management
```python
# Simple Python BOM generator
import pandas as pd

class RobotBOM:
    def __init__(self, product_name):
        self.product = product_name
        self.items = []

    def add_item(self, pn, name, category, qty, unit_cost_usd,
                 supplier, lead_time_days, notes=""):
        self.items.append({
            'PN': pn, 'Name': name, 'Category': category, 'Qty': qty,
            'Unit Cost': unit_cost_usd, 'Total Cost': qty * unit_cost_usd,
            'Supplier': supplier, 'Lead Time': lead_time_days, 'Notes': notes
        })

    def generate_report(self):
        df = pd.DataFrame(self.items)
        total = df['Total Cost'].sum()
        print(f"\n{'='*60}")
        print(f"BOM: {self.product}")
        print(f"Total Parts: {len(self.items)}")
        print(f"Total COGS: ${total:.2f}")
        print(df.groupby('Category')['Total Cost']
                .sum().sort_values(ascending=False))
        return df, total

# Example BOM for a 6-DOF robot arm
bom = RobotBOM("BlueArm-6 v1.0")
bom.add_item("MOT-001", "Dynamixel XH540-W270", "Actuator", 6, 299, "Robotis", 14)
bom.add_item("MCU-001", "STM32F446RE Nucleo", "Electronics", 1, 22, "ST/Mouser", 7)
bom.add_item("CPT-001", "Jetson Orin NX 16GB", "Compute", 1, 499, "NVIDIA/Arrow", 21)
bom.add_item("SEN-001", "Intel RealSense D435i", "Sensor", 1, 179, "Intel", 7)
bom.add_item("STR-001", "Al 6061 machined links", "Structure", 6, 45, "Local CNC", 10)
bom.add_item("PCB-001", "Main controller PCB", "PCB", 1, 85, "JLCPCB", 14)

df, total_cogs = bom.generate_report()
# Pricing rule of thumb: target price ≈ COGS × 3–5
```
PCB Design and Manufacturing
| Layer | Function | Material |
|---|---|---|
| Top Copper | Component placement, signal routing | 35μm Cu foil |
| GND Plane | Ground return, EMI shielding | 35μm Cu full pour |
| Power Plane | Power distribution (+5V, +3.3V) | 35μm Cu full pour |
| Bottom Copper | Signal routing, test points | 35μm Cu foil |
| Substrate | Dielectric insulation | FR4 1.6mm, Tg170 |
PCB Design Rules for Robot Controllers
Clearance: 0.2 mm minimum for signals, 0.5 mm for high voltage.
Via: 0.8 mm drill, 1.2 mm pad as standard.
Impedance: 50 Ω single-ended; 90–100 Ω for high-speed differential pairs (USB, Ethernet).
Decoupling: 100 nF ceramic per IC power pin, 10 μF bulk per section.
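Trace width for motor currents can be estimated from the IPC-2221 external-layer formula I = k·ΔT^0.44·A^0.725, with A the cross-section in mil² and k ≈ 0.048. A small helper sketching the calculation (the function name is our own):

```python
def trace_width_mm(current_a, temp_rise_c=10, copper_oz=1.0, k=0.048):
    """Minimum external-layer trace width (mm) per IPC-2221.
    current_a: trace current in A; temp_rise_c: allowed rise in °C;
    copper_oz: copper weight (1 oz ≈ 1.378 mil thickness)."""
    # Invert I = k * dT^0.44 * A^0.725 for the cross-sectional area A (mil²)
    area_mil2 = (current_a / (k * temp_rise_c**0.44)) ** (1 / 0.725)
    thickness_mil = 1.378 * copper_oz
    return area_mil2 / thickness_mil * 0.0254  # mil → mm

# Example: a 5 A motor-phase trace on 1 oz copper, 10 °C rise
w = trace_width_mm(5.0)  # roughly 2.8 mm
```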
PCB Fab Service Comparison
| Fab House | Min Order | Lead Time | Quality | Best For |
|---|---|---|---|---|
| JLCPCB | 5 pcs, ~$2 | 3–7 days | Good | Prototypes, small scale |
| PCBWay | 5 pcs, ~$5 | 5–10 days | Very Good | Prototypes + small production |
| LCSC/JLCPCB+PCBA | Turnkey assembly | 10–15 days | Good | One-stop SMT assembly |
| Seeed Fusion | 5 pcs, ~$4.9 | 7–14 days | Good | IoT/robot makers |
| TTM/Sanmina | High MOQ | 2–4 weeks | Excellent | Industrial mass production |
Testing and Quality Assurance
```python
#!/usr/bin/env python3
"""
Automated robot bring-up test sequence.
Run on every unit before shipping to the customer.
"""
import time
import logging
from dataclasses import dataclass
from typing import List
import numpy as np

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s [%(levelname)s] %(message)s')

@dataclass
class TestResult:
    test_name: str
    passed: bool
    value: float
    unit: str
    limit_min: float
    limit_max: float
    message: str = ""

class RobotQATest:
    def __init__(self, robot_interface, serial_number):
        self.robot = robot_interface
        self.sn = serial_number
        self.results: List[TestResult] = []

    def run_all(self):
        tests = [
            self.test_power_supply,
            self.test_motor_communication,
            self.test_imu_calibration,
            self.test_joint_range_of_motion,
            self.test_encoder_accuracy,
            self.test_emergency_stop,
            self.test_payload_capacity,
            self.test_repeatability,
        ]
        for test in tests:
            try:
                result = test()
                self.results.append(result)
                status = "✓ PASS" if result.passed else "✗ FAIL"
                logging.info(f"{status} | {result.test_name}: "
                             f"{result.value:.3f} {result.unit}")
            except Exception as e:
                logging.error(f"TEST ERROR {test.__name__}: {e}")
        # Summarize results into a shipping certificate (implementation not shown)
        return self.generate_certificate()

    def test_repeatability(self):
        """ISO 9283 positional repeatability test."""
        target = [0.5, 0.0, 0.4]
        positions = []
        for _ in range(30):  # 30 repetitions per the ISO standard
            self.robot.move_home()
            time.sleep(0.5)
            self.robot.move_to(target)
            time.sleep(0.5)
            positions.append(self.robot.get_ee_position())
        pos_arr = np.array(positions)
        mean_pos = pos_arr.mean(axis=0)
        errors = np.linalg.norm(pos_arr - mean_pos, axis=1)
        rp = 2 * errors.std() * 1000  # RP in mm (2σ)
        return TestResult(
            "Positional Repeatability (ISO 9283)",
            rp <= 0.1, rp, "mm", 0, 0.1,
            f"RP = {rp:.3f} mm (limit: ±0.1 mm)"
        )
```
Certification and Compliance
| Certification | Region | Scope | Mandatory |
|---|---|---|---|
| CE Marking | EU/EEA | EMC, Safety (Machinery Directive 2006/42/EC) | Yes, to sell in the EU |
| FCC Part 15 | USA | RF emissions, interference | Yes, for US wireless products |
| SRRC / CCC | China | RF approval, safety | Yes, for the Chinese market |
| POSTEL / SDPPI | Indonesia | Telecom equipment | Yes, to import/sell in Indonesia |
| ISO 10218 | International | Industrial robot safety | Best practice |
| ISO/TS 15066 | International | Collaborative robots (cobots) | Best practice |
| IEC 61508 | International | Functional Safety (SIL) | Safety-critical applications |
| RoHS / REACH | EU | Hazardous substances | Yes, for the EU market |
Supply Chain Strategy
Pre-Launch Checklist for a Robot Product
| Area | Item | Status |
|---|---|---|
| Hardware | Design freeze, final gerber files | ☐ |
| Hardware | DFM review with the fabricator | ☐ |
| Hardware | EVT/DVT/PVT builds completed | ☐ |
| Software | Firmware release signed + versioned | ☐ |
| Software | OTA update mechanism tested | ☐ |
| Safety | E-stop tested in hardware and software | ☐ |
| Safety | Workspace limits in software + hardware | ☐ |
| QA | 100% automated test on every unit | ☐ |
| Compliance | CE / FCC / local certification | ☐ |
| Docs | User manual, API docs, safety manual | ☐ |
| Support | Spare part availability for 5 years | ☐ |
Blue Dragon Security Research