# Isaac Lab Tasks

This guide explains how to make Isaac Lab training or play scripts discover the
task definitions shipped in `lav2.tasks.isaaclab`.
## Typical Workflow

- Move to the Isaac Lab reinforcement learning scripts directory, such as
  `scripts/reinforcement_learning/`.
- Ensure the script imports `lav2.tasks.isaaclab` in addition to `isaaclab_tasks`.
- Run the script with a LAV2 task name such as `--task Isaac-LAV2-Base-Direct-v0`.
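The reason the extra import matters can be illustrated with a minimal sketch of import-time task registration. This mirrors the pattern Isaac Lab's task packages follow, but uses a plain dict stand-in rather than the real gym registry; all names here are illustrative:

```python
# Minimal sketch of import-time task registration. Importing a task
# package runs its __init__, which adds entries to a global registry
# keyed by task name; --task then looks names up in that registry.
TASK_REGISTRY: dict[str, str] = {}

def register(task_id: str, entry_point: str) -> None:
    TASK_REGISTRY[task_id] = entry_point

# "import isaaclab_tasks" triggers registrations like this one:
register("Isaac-Cartpole-v0", "isaaclab_tasks.cartpole:CartpoleEnv")

# Without "import lav2.tasks.isaaclab", this call never runs, so the
# LAV2 task name cannot be resolved by the training script:
register("Isaac-LAV2-Base-Direct-v0", "lav2.tasks.isaaclab:Env")

assert "Isaac-LAV2-Base-Direct-v0" in TASK_REGISTRY
```

If the import is missing, the script fails at task lookup even though the package is installed, which is why both patching methods below exist.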
## Batch Import Patch

On Linux or macOS, add the import below every `import isaaclab_tasks` line:

```bash
sed -i '/import isaaclab_tasks/a\
import lav2.tasks.isaaclab' $(find scripts/reinforcement_learning/ -name '*.py')
```
On Windows, use PowerShell:

```powershell
Get-ChildItem -Path scripts/reinforcement_learning/ -Filter *.py -Recurse | ForEach-Object {
    (Get-Content $_.FullName) -replace 'import isaaclab_tasks', "import isaaclab_tasks`nimport lav2.tasks.isaaclab" | Set-Content $_.FullName
}
```
## Alternative: Symlink Method (Recommended)

If you want Isaac Lab to see the LAV2 task package without patching individual scripts, create a symlink into the Isaac Lab task package tree.

On Linux or macOS:

```bash
ln -s /path/to/lav2/tasks/isaaclab source/isaaclab_tasks/isaaclab_tasks/lav2_tasks
```

On Windows:

```powershell
New-Item -ItemType SymbolicLink -Path source\isaaclab_tasks\isaaclab_tasks\lav2_tasks -Target C:\path\to\lav2\tasks\isaaclab
```
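If you prefer a single cross-platform command, the same symlink can be created with Python's `pathlib`. This is a sketch using temporary placeholder paths, since the real source and target locations depend on your checkout (note that on Windows, creating symlinks may require Developer Mode or elevated privileges):

```python
# Cross-platform symlink sketch using the standard library.
# The directory layout below is a placeholder for the real
# lav2 checkout and Isaac Lab source tree.
from pathlib import Path
import tempfile

root = Path(tempfile.mkdtemp())

# Stand-in for /path/to/lav2/tasks/isaaclab
target = root / "lav2" / "tasks" / "isaaclab"
target.mkdir(parents=True)

# Stand-in for source/isaaclab_tasks/isaaclab_tasks/lav2_tasks
link = root / "source" / "isaaclab_tasks" / "isaaclab_tasks" / "lav2_tasks"
link.parent.mkdir(parents=True)

link.symlink_to(target, target_is_directory=True)
assert link.resolve() == target.resolve()
```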
## Notes

- Use the import-based method when you want the change to stay explicit in the script.
- Use the symlink-based method when you want to avoid touching multiple RL entrypoint files.
## Training Workflows

There are two primary ways to train LAV2 Isaac Lab tasks.
### 1. Use Isaac Lab's Built-In Training Scripts

This is the standard path when you want to stay close to Isaac Lab's native RL entrypoints and CLI interface.

Example with skrl:

```bash
./isaaclab.sh -p scripts/reinforcement_learning/skrl/train.py --task Isaac-LAV2-Base-v0 --num_envs 4096 --max_iterations 500 --headless
```

To inspect the available CLI arguments and their meaning:

```bash
./isaaclab.sh -p scripts/reinforcement_learning/skrl/train.py --help
```
### 2. Use LAV2's Custom skrl Entrypoints

This path is specific to LAV2 and currently targets skrl only. Use it when
you want the repository's custom network architecture and training wiring under
`lav2.runner.skrl.cfg.*`.

Example entrypoint:

```bash
uv run python -m lav2.runner.skrl.cfg.LAV2_base
```

Use the built-in Isaac Lab scripts when you want compatibility with Isaac Lab's
standard workflows. Use the LAV2 skrl entrypoints when you specifically need
the custom network and config organization provided by this repository.
## Modifying Training Parameters

For meaningful task tuning, the primary method is to edit the configuration files directly instead of relying only on command-line overrides.

In LAV2's Isaac Lab tasks, the main entrypoint is the environment config file, for example:

- `lav2/tasks/isaaclab/LAV2_base/LAV2_env_cfg.py`
- `lav2/tasks/isaaclab/LAV2_base/LAV2_vel_env_cfg.py`
- `lav2/tasks/isaaclab/LAV2_trajectory/LAV2_traj_env_cfg.py`

This follows Isaac Lab's manager-based environment pattern, where the task is assembled from config blocks such as scene, actions, observations, events, commands, rewards, and terminations.
### What To Modify In The Env Config

When you want to change task behavior, start from the environment cfg rather than the training script.

Common places to edit:

- `scene`: number of environments, terrain or asset layout, spacing, and scene composition
- `actions`: action scaling, action type, command mapping, and clipping
- `observations`: which signals are exposed to the policy, grouping, noise, and concatenation behavior
- `commands`: command ranges, resampling periods, and target generation logic
- `events`: reset randomization, perturbations, startup randomization, and domain randomization hooks
- `rewards`: reward terms, weights, thresholds, and shaping structure
- `terminations`: timeout, crash, drift, or invalid-state termination logic
This is the same structure Isaac Lab uses in its manager-based RL environment tutorial, where each part of the task is broken into dedicated config classes and then assembled into a single environment config.
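The assembly pattern can be sketched with plain dataclasses. This is an illustrative stand-in, not the real Isaac Lab config API; every class and field name below is a hypothetical example of the structure described above:

```python
# Illustrative sketch of a manager-based env config: dedicated config
# blocks assembled into one environment config, mirroring the layout
# of files like LAV2_env_cfg.py. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SceneCfg:
    num_envs: int = 4096       # parallel environments
    env_spacing: float = 2.5   # spacing between env origins

@dataclass
class RewardsCfg:
    track_vel_weight: float = 1.0     # tracking reward weight
    action_rate_weight: float = -0.01 # smoothness penalty weight

@dataclass
class EnvCfg:
    scene: SceneCfg = field(default_factory=SceneCfg)
    rewards: RewardsCfg = field(default_factory=RewardsCfg)

cfg = EnvCfg()
cfg.scene.num_envs = 1024           # task-level change: lives in the env cfg
cfg.rewards.track_vel_weight = 2.0  # emphasize tracking over smoothness
```

Because each block is a separate class, edits stay local: changing a reward weight never touches the scene definition.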
### Common Examples

Typical edits in practice include:
- Increasing or decreasing the number of parallel environments
- Changing reward weights to emphasize tracking, smoothness, or energy use
- Expanding or narrowing command ranges
- Adjusting reset randomization and perturbation strength
- Changing observation composition before changing the network architecture
If a parameter affects task semantics, place it in the environment config. If it affects optimizer or policy training behavior, place it in the agent config.
## Modifying Agent And Runner Config

Training-specific parameters usually belong in the agent config rather than the environment config.
For Isaac Lab built-in workflows, these values usually live in the corresponding
agent config files under the task package, such as the YAML or Python files in
`lav2/tasks/isaaclab/.../agents/`.

For LAV2's custom skrl path, the main entrypoints live under
`lav2.runner.skrl.cfg.*`. Use those modules when you need to modify:
- policy or value network architecture
- hidden dimensions and activation choices
- PPO or other learner hyperparameters
- experiment naming and logging defaults
- checkpoint or resume behavior wired by the custom runner
Example custom entrypoint:

```bash
uv run python -m lav2.runner.skrl.cfg.LAV2_base
```
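As a rough mental model of where each knob from the list above lives, here is a hypothetical sketch of the kind of values such a cfg module wires together. The actual structure and names in `lav2.runner.skrl.cfg.*` may differ:

```python
# Hypothetical agent/runner config sketch. None of these keys are the
# repository's real names; they only show which category of setting
# belongs in the agent config rather than the env config.
AGENT_CFG = {
    # network architecture: hidden dims and activation choices
    "policy": {"hidden_dims": [512, 256, 128], "activation": "elu"},
    # learner hyperparameters (PPO-style)
    "ppo": {"learning_rate": 3e-4, "rollouts": 24, "mini_batches": 4},
    # experiment naming and logging defaults
    "experiment": {"name": "LAV2_base", "write_interval": 100},
    # checkpoint / resume behavior wired by the runner
    "checkpoint": {"interval": 500, "resume_path": None},
}

# Training-behavior changes stay here, not in the environment config:
AGENT_CFG["policy"]["hidden_dims"] = [256, 128, 64]
AGENT_CFG["ppo"]["learning_rate"] = 1e-4
```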
## A Practical Editing Strategy

Use this order when tuning:

1. Confirm the task definition and env config are the ones actually being used.
2. Change task-level parameters in the environment config first.
3. Change optimizer or policy parameters in the agent or runner config second.
4. Only then use command-line overrides for quick experiments.
This keeps the repository's source of truth readable and avoids hiding important task behavior inside one-off shell commands.
## Hydra CLI Overrides
Hydra CLI overrides are still useful for temporary experiments, but they should be treated as a fast iteration tool, not the main place to maintain a task.
Example:

```bash
./isaaclab.sh -p scripts/reinforcement_learning/skrl/train.py --task Isaac-LAV2-Base-v0 --headless env.scene.num_envs=4096 agent.seed=2024
```

Isaac Lab also keeps convenience CLI flags such as `--num_envs`, `--seed`, and
`--max_iterations`. These explicit flags take precedence over Hydra overrides.
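To see how a dotted override like `env.scene.num_envs=4096` reaches a nested field, here is a minimal stand-in for the mechanism, using a plain dict in place of Hydra's real config tree (Hydra itself also handles type coercion, interpolation, and validation that this sketch omits):

```python
# Toy version of a dotted-path CLI override: walk the nested config
# along the dotted key and set the final leaf. Integer-looking values
# are coerced; everything else stays a string.
def apply_override(cfg: dict, expr: str) -> None:
    path, raw = expr.split("=", 1)
    keys = path.split(".")
    node = cfg
    for key in keys[:-1]:
        node = node[key]           # descend: env -> scene
    node[keys[-1]] = int(raw) if raw.lstrip("-").isdigit() else raw

cfg = {"env": {"scene": {"num_envs": 64}}, "agent": {"seed": 0}}
apply_override(cfg, "env.scene.num_envs=4096")
apply_override(cfg, "agent.seed=2024")
```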
Practical Advice
num_envsis one of the highest-impact knobs for throughput and memory use.- Prefer
--headlessfor larger runs to reduce rendering overhead. - If you hit out-of-memory errors, reduce parallel environments first.
- If training becomes unstable or produces NaNs, inspect reward scale, action scale, reset randomization, controller gains, and solver-related settings.
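For the NaN case, a small finite-value guard can localize where the bad values first appear. This is a stdlib-only sketch for scalar batches; in a real training loop you would check tensors with your framework's own `isfinite` utilities:

```python
# Minimal NaN/inf guard for debugging unstable training runs.
import math

def check_finite(name: str, values) -> None:
    """Raise early if a batch of scalars contains NaN or inf."""
    bad = [v for v in values if not math.isfinite(v)]
    if bad:
        raise ValueError(f"{name} contains non-finite values, e.g. {bad[:3]}")

check_finite("rewards", [0.1, -0.2, 0.3])  # passes silently
```

Calling it on rewards, observations, and actions right after each rollout step narrows down which term introduced the NaN before it propagates into the optimizer.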
## References

- Isaac Lab Hydra guide: https://isaac-sim.github.io/IsaacLab/main/source/features/hydra.html
- Isaac Lab training guide: https://isaac-sim.github.io/IsaacLab/main/source/overview/reinforcement-learning/training_guide.html
- Isaac Lab manager-based RL env tutorial: https://isaac-sim.github.io/IsaacLab/main/source/tutorials/03_envs/create_manager_rl_env.html