Understanding spatial relationships from visual input is fundamental to how humans perceive and navigate the world. Yet modern vision models still struggle with basic spatial reasoning tasks that come naturally to us — understanding occlusion, estimating relative distances, or predicting how objects interact in 3D space.

The Gap in Spatial Understanding

Current vision-language models excel at recognition tasks (identifying objects, describing scenes) but consistently underperform on spatial reasoning benchmarks. When asked “Is the red ball in front of or behind the blue box?”, even the best models often resort to statistical priors rather than genuine geometric reasoning.

We identify three core challenges:

  1. Viewpoint dependence — Models trained on web-scraped image-caption pairs rarely encounter explicit spatial descriptions, leading to viewpoint-agnostic representations.
  2. Scale ambiguity — Without depth supervision, transformers learn appearance features that conflate object size with distance.
  3. Compositional spatial relations — Understanding “A is to the left of B, which is above C” requires chaining spatial relations, a capability that doesn’t emerge from standard pre-training.
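
The compositional challenge can be made concrete with a toy sketch. Assuming each pairwise relation can be embedded as a 2D displacement vector (an illustrative simplification, not our model's actual representation), multi-hop reasoning becomes vector composition:

```python
# Toy illustration: chaining pairwise spatial relations by composing
# displacement vectors. "A left of B" plus "B above C" implies A is
# up and to the left of C.
import numpy as np

RELATIONS = {
    "left_of":  np.array([-1.0, 0.0]),
    "right_of": np.array([1.0, 0.0]),
    "above":    np.array([0.0, 1.0]),
    "below":    np.array([0.0, -1.0]),
}

def chain(relations):
    """Compose a chain of pairwise relations into one net displacement."""
    return sum((RELATIONS[r] for r in relations), np.array([0.0, 0.0]))

# "A is to the left of B, which is above C" -> A relative to C
net = chain(["left_of", "above"])  # array([-1., 1.]): up and to the left
```

Standard pre-training never forces a model to perform this kind of composition, which is why multi-hop relations are a distinct failure mode.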

What We’re Working On

Our current research focuses on three directions:

Depth-Aware Attention

We’re experimenting with attention mechanisms that incorporate estimated depth maps as positional biases. Early results show that injecting geometric priors into the attention computation — rather than treating depth as a separate modality — leads to more robust spatial representations.

```python
# Simplified depth-aware attention bias
import torch.nn as nn

def quantize_relative_depth(rel_depth, num_buckets):
    """Map signed relative depths to integer bucket indices."""
    # Linearly bin clamped values in [-1, 1] into num_buckets buckets.
    clamped = rel_depth.clamp(-1.0, 1.0)
    return ((clamped + 1.0) / 2.0 * (num_buckets - 1)).long()

def depth_attention_bias(depth_map, bucket_embeddings, num_buckets=32):
    """Convert a depth map to relative-depth biases for attention."""
    H, W = depth_map.shape
    # Compute pairwise relative depth between all flattened positions.
    depth_flat = depth_map.reshape(-1)
    rel_depth = depth_flat[:, None] - depth_flat[None, :]
    # Quantize into buckets that index a learnable nn.Embedding
    # of shape (num_buckets, num_heads).
    buckets = quantize_relative_depth(rel_depth, num_buckets)
    return bucket_embeddings(buckets)  # (H*W, H*W, num_heads)
```
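
To make the "inject into the attention computation" part concrete, here is a minimal sketch of how a bias tensor of shape `(H*W, H*W, num_heads)` would be added to raw attention scores before the softmax. The shapes and names here are illustrative assumptions, not our full implementation:

```python
# Sketch: injecting a precomputed geometric bias into attention scores.
import torch

num_heads, N = 4, 64                      # N = H * W flattened positions
bias = torch.randn(N, N, num_heads)       # stand-in for a depth-derived bias
scores = torch.randn(num_heads, N, N)     # raw q·k^T / sqrt(d) scores
scores = scores + bias.permute(2, 0, 1)   # add per-head geometric prior
attn = scores.softmax(dim=-1)             # attention weights, rows sum to 1
```

Because the bias enters before normalization, the model can still override the geometric prior when appearance evidence is strong.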

Spatial Relation Benchmarks

Existing benchmarks mix spatial reasoning with other capabilities (object recognition, language understanding), making it hard to isolate spatial reasoning performance. We’re building SpatialBench, a diagnostic benchmark with:

  • Controlled scenes rendered with exact ground-truth spatial relations
  • Minimal language — simple templates that remove linguistic confounds
  • Progressive difficulty — from binary relations to multi-hop spatial chains
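
As a rough illustration of the "minimal language" design, a benchmark item might pair a rendered scene with a template-filled question. The field names and template below are hypothetical, chosen only to show the shape of the data:

```python
# Hypothetical SpatialBench item format (names and fields are illustrative).
from dataclasses import dataclass, field

@dataclass
class SpatialItem:
    scene_id: str                      # rendered scene with exact ground truth
    question: str                      # filled from a minimal template
    relation_chain: list = field(default_factory=list)  # 1+ hops
    answer: str = ""                   # "yes" / "no" for binary relations

TEMPLATE = "Is the {a} {relation} the {b}?"

item = SpatialItem(
    scene_id="scene_0042",
    question=TEMPLATE.format(a="red ball", relation="in front of", b="blue box"),
    relation_chain=["in_front_of"],
    answer="yes",
)
```

Keeping the question templated means a model's errors can be attributed to the spatial relation itself rather than to parsing unusual phrasing.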

Self-Supervised Spatial Pre-Training

We’re exploring whether spatial reasoning can emerge from self-supervised objectives on video data. The key insight: temporal consistency in video provides a natural supervision signal for 3D structure.
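
One common way to turn that insight into an objective (a sketch of an assumed formulation, not necessarily our final loss) is to require that depth predicted at frame t agrees with depth predicted at frame t+1 after warping it back through the estimated camera motion:

```python
# Sketch of a temporal depth-consistency objective on video frames.
# depth_t: depth predicted directly at frame t.
# depth_t1_warped: depth predicted at frame t+1, warped into frame t's
# view via estimated camera motion (the warp itself is omitted here).
import torch

def temporal_depth_consistency(depth_t, depth_t1_warped):
    """L1 agreement between warped and directly predicted depth."""
    return (depth_t - depth_t1_warped).abs().mean()

d_t = torch.rand(1, 64, 64)
d_t1_warped = d_t + 0.01 * torch.randn_like(d_t)  # stand-in for a warped map
loss = temporal_depth_consistency(d_t, d_t1_warped)
```

Minimizing this loss rewards depth estimates that are consistent with a single underlying 3D structure, without any ground-truth depth labels.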

| Approach | Spatial Acc. | Depth Est. (error, lower is better) | Training Cost |
|---|---|---|---|
| CLIP baseline | 52.3% | 0.41 | 1x |
| + Depth-aware attn | 67.8% | 0.29 | 1.2x |
| + Video pre-training | 71.2% | 0.24 | 3.5x |
| + Both (ours) | 74.6% | 0.21 | 4.1x |

What’s Next

We’ll release SpatialBench and our depth-aware attention implementation in the coming weeks. We’re also exploring how these spatial representations transfer to downstream robotics tasks — stay tuned.


This is an ongoing research direction. We welcome collaboration — if you’re working on spatial reasoning, reach out at research@clawvision.org.