TABULA: Multi-Model AI Cognitive Brain

Overview

TABULA is an experimental project aimed at developing a multi-model AI cognitive brain capable of autonomous learning through real-world interaction. The system is designed to form thoughts, act on curiosity, and develop its own symbolic representations of objects, experiences, and actions from the input processed by its auditory and visual cortex modules.
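
As a rough sketch of how these pieces might fit together, the example below defines a hypothetical interface for a sensory module and a shared symbolic memory. Every name in it (SensoryCortex, SymbolicMemory, perceive, store_symbol, nearest_symbol) is an illustrative assumption, not TABULA's actual API.

```python
# Hypothetical sketch of how TABULA's modules could fit together.
# Every name here (SensoryCortex, SymbolicMemory, perceive, store_symbol,
# nearest_symbol) is an illustrative assumption, not the project's actual API.
from dataclasses import dataclass, field
from typing import Protocol

import numpy as np


class SensoryCortex(Protocol):
    """Any sensory module (auditory or visual) that turns raw input into an embedding."""

    def perceive(self, raw_input: np.ndarray) -> np.ndarray: ...


@dataclass
class SymbolicMemory:
    """Stores embeddings as named symbols that later reasoning can refer back to."""

    symbols: dict[str, np.ndarray] = field(default_factory=dict)

    def store_symbol(self, name: str, embedding: np.ndarray) -> None:
        self.symbols[name] = embedding

    def nearest_symbol(self, embedding: np.ndarray) -> str | None:
        # Cosine similarity against every stored symbol; return the closest match.
        best_name, best_score = None, -1.0
        for name, vec in self.symbols.items():
            score = float(
                np.dot(vec, embedding)
                / (np.linalg.norm(vec) * np.linalg.norm(embedding) + 1e-9)
            )
            if score > best_score:
                best_name, best_score = name, score
        return best_name
```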

Project Objectives

Architecture

Current Development

The project currently focuses on two primary sensory processing systems:

šŸ”Š Auditory Cortex

The auditory cortex provides the model with the audio embeddings needed for pattern recognition and symbol creation in memory. Its individual components are documented under Auditory Cortex Components below.
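
As a minimal sketch of the kind of embedding this path might produce, the example below computes a pooled log-mel embedding with torchaudio. The choice of torchaudio and the specific parameters are assumptions for illustration, not a description of TABULA's actual encoder.

```python
# Minimal sketch of producing an audio embedding that could be stored as a symbol.
# torchaudio and the pooled log-mel features are stand-ins for illustration;
# TABULA's actual auditory components may work differently.
import torch
import torchaudio


def embed_audio(path: str, sample_rate: int = 16_000, n_mels: int = 64) -> torch.Tensor:
    waveform, sr = torchaudio.load(path)                    # (channels, samples)
    waveform = waveform.mean(dim=0, keepdim=True)           # mix down to mono
    if sr != sample_rate:
        waveform = torchaudio.functional.resample(waveform, sr, sample_rate)
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_mels=n_mels
    )(waveform)                                             # (1, n_mels, frames)
    log_mel = torch.log(mel + 1e-6)
    # Mean-pool over time to get a fixed-size vector that memory can compare or cluster.
    return log_mel.mean(dim=-1).squeeze(0)                  # (n_mels,)
```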

šŸ‘ļø Visual Cortex

The visual cortex implements a fovea-inspired computer vision system designed for computational efficiency; its individual components are documented under Visual Cortex Components below.
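
A fovea-inspired system typically processes a small region around the point of attention at full resolution and the rest of the frame at reduced resolution. The sketch below shows that general idea; the function name, crop sizes, and use of Pillow are illustrative assumptions rather than the project's implementation.

```python
# Illustrative fovea-style sampling: a full-resolution crop around the point of
# attention plus a heavily downsampled view of the whole frame. Sizes, names,
# and the use of Pillow are assumptions, not TABULA's actual parameters.
import numpy as np
from PIL import Image


def foveate(image: Image.Image, center: tuple[int, int],
            fovea_size: int = 64, periphery_size: int = 64) -> dict[str, np.ndarray]:
    cx, cy = center
    half = fovea_size // 2
    # High-acuity crop around the fixation point (the "fovea").
    fovea = image.crop((cx - half, cy - half, cx + half, cy + half))
    # Whole frame squeezed into a small thumbnail (the low-acuity "periphery").
    periphery = image.resize((periphery_size, periphery_size), Image.BILINEAR)
    return {"fovea": np.asarray(fovea), "periphery": np.asarray(periphery)}


# Example: a 1280x720 frame is reduced to two 64x64 patches, so downstream
# processing never has to touch the full-resolution image.
frame = Image.new("RGB", (1280, 720))
patches = foveate(frame, center=(640, 360))
```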

Roadmap

To-Do List

Installation

Coming soon: installation instructions will be provided as components reach stable releases.

Documentation

Auditory Cortex Components

Training Documentation

Inference & Usage

Visual Cortex Components

Usage

For practical usage of the current components:

  1. Audio/Video Separation: See the Oneshot Inference Pipeline for separating voice and noise from audio or video files. An illustrative sketch of this step follows the list below.

  2. Model Training: Refer to the training documentation for each component, listed under Training Documentation above.

  3. Architecture Details: For an overview of the system architecture, see the Architecture section above.
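
The Oneshot Inference Pipeline documentation is the authoritative reference for audio/video separation. Purely to illustrate what separating voice from noise produces, the sketch below applies simple spectral gating with SciPy; it is not the project's pipeline, and the file names and threshold are arbitrary assumptions.

```python
# Crude spectral-gating noise suppression, shown only to illustrate what
# voice/noise separation produces. This is NOT the Oneshot Inference Pipeline;
# file names and the 2.0 threshold are arbitrary assumptions.
import numpy as np
import soundfile as sf
from scipy.signal import stft, istft

audio, sr = sf.read("input.wav")                 # assumed WAV input
if audio.ndim > 1:
    audio = audio.mean(axis=1)                   # mix down if the file is stereo

f, t, spec = stft(audio, fs=sr, nperseg=1024)
magnitude = np.abs(spec)
# Estimate a per-frequency noise floor from the quieter frames, then keep only
# time-frequency bins that rise clearly above it.
noise_floor = np.percentile(magnitude, 20, axis=1, keepdims=True)
mask = magnitude > 2.0 * noise_floor

_, voice = istft(spec * mask, fs=sr, nperseg=1024)
_, noise = istft(spec * ~mask, fs=sr, nperseg=1024)

sf.write("voice.wav", voice, sr)
sf.write("noise.wav", noise, sr)
```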

Contributing

This is an experimental research project. Contributions and ideas are welcome. Please open an issue to discuss major changes before submitting pull requests.

License

MIT License

Status

🚧 Work in Progress - This project is in active experimental development. Components and APIs may change significantly as the architecture evolves.