Totally new ai.nix, integrating Mistral with ZED
@@ -467,6 +467,7 @@ The tree below shows the full repository layout, with the standardized internal
├── assets
│   ├── conf
│   │   ├── apps
│   │   │   ├── ai.nix
│   │   │   ├── flatpaks.conf
│   │   │   └── packages.conf
│   │   ├── core
@@ -524,7 +525,6 @@ The tree below shows the full repository layout, with the standardized internal
│   └── scripts
├── configuration
│   ├── apps
│   │   ├── ai.nix
│   │   ├── install_flatpaks.nix
│   │   └── install_packages.nix
│   ├── core
@@ -858,7 +858,6 @@ This section describes the main system configuration for the computers that I ha
{ pkgs, user, ... } :
{
  imports = [
    ./apps/ai.nix
    ./apps/install_flatpaks.nix
    ./apps/install_packages.nix
    ./core/files.nix
@@ -894,30 +893,6 @@ This section describes the main system configuration for the computers that I ha
|
||||
** Apps section
|
||||
This section describes a way of installing packages, either through nixpkgs orr flatpak. What apps to instal is decided in the files ./assets/conf/apps/packages.conf and flatpaks.conf
|
||||
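The exact format of those files is whatever the install modules shown later in this document parse; as a purely hypothetical illustration, a one-entry-per-line layout could look like:

#+begin_src conf
# packages.conf -- hypothetical example: one nixpkgs attribute name per line
git
neovim
ripgrep
#+end_src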

** ai.nix

This module enables and configures the Ollama system service on NixOS, using the Vulkan-accelerated package (CUDA or ROCm builds can be substituted for NVIDIA or AMD GPUs).
It ensures the Ollama CLI is available system-wide for interacting with local models.
It automatically pulls and prepares the selected coding models (e.g., Qwen2.5-Coder and StarCoder2) at system activation.

#+begin_src nix :tangle configuration/apps/ai.nix :noweb tangle :mkdirp yes
{ config, lib, pkgs, ... }:

{
  services.ollama = {
    enable = true;
    package = pkgs.ollama-vulkan;
    loadModels = [
      "qwen2.5-coder:7b"
      "qwen2.5-coder:32b"
      "starcoder2:15b"
    ];
  };

  environment.systemPackages = [
    pkgs.ollama-vulkan
  ];
}
#+end_src
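Once the service is running, the pulled models can be exercised over Ollama's HTTP API (`/api/generate`). A minimal Python sketch using only the standard library; the host and model name match the configuration above:

#+begin_src python
import json
import urllib.request

OLLAMA_HOST = "http://127.0.0.1:11434"  # Ollama's default listen address

def build_generate_request(prompt, model="qwen2.5-coder:7b"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt, model="qwen2.5-coder:7b"):
    """Request a completion from the local Ollama service."""
    req = urllib.request.Request(
        OLLAMA_HOST + "/api/generate",
        data=build_generate_request(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (only works while the Ollama service is running):
# print(generate("Write a Nix expression that doubles a number."))
#+end_src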


** install_packages.nix

#+begin_src nix :tangle configuration/apps/install_packages.nix :noweb tangle :mkdirp yes
{ config, lib, pkgs, flakeRoot, inputs, ... }:
@@ -1674,6 +1649,88 @@ This module will import all necessities.
}
#+end_src

** AI integrated with ZED

This Home Manager module installs the ZED editor and Ollama (Vulkan build) in the user profile, runs Ollama as a user service, pulls selected coding and chat models, and points ZED at both the local Ollama instance and the Mistral cloud API.

#+begin_src nix :tangle home/apps/ai.nix :noweb tangle :mkdirp yes
{ config, lib, pkgs, ... }:

{
  # Install ZED and Ollama (Vulkan for CPU/AMD; use `ollama` for NVIDIA CUDA or `ollama-rocm` for AMD ROCm)
  home.packages = [
    pkgs.ollama-vulkan
    pkgs.zed
  ];

  # Environment variables for ZED and Ollama
  home.sessionVariables = {
    OLLAMA_HOST = "http://127.0.0.1:11434";
    MISTRAL_API_KEY = "<your-mistral-api-key>"; # Replace with your actual Mistral API key; never commit a real key
  };

  # Run Ollama as a user service (starts with login).
  # Note: Home Manager's module lives under `services.ollama`, not
  # `home.services.ollama`, and has no `onStart` option, so model
  # pulls are done on activation instead.
  services.ollama = {
    enable = true;
    package = pkgs.ollama-vulkan;
  };

  # Pull coding and chat models when the configuration is activated
  # (requires the Ollama server to be reachable at OLLAMA_HOST)
  home.activation.pullOllamaModels = lib.hm.dag.entryAfter [ "writeBoundary" ] ''
    ${pkgs.ollama-vulkan}/bin/ollama pull codellama:70b || true # Best for coding
    ${pkgs.ollama-vulkan}/bin/ollama pull mixtral:8x7b || true  # Best for chat

    # To pull additional models, uncomment or add lines below:
    # ${pkgs.ollama-vulkan}/bin/ollama pull llama3:8b         # General-purpose
    # ${pkgs.ollama-vulkan}/bin/ollama pull qwen2.5-coder:7b  # Multilingual coding
    # ${pkgs.ollama-vulkan}/bin/ollama pull qwen2.5-coder:32b # Larger coding model
    # ${pkgs.ollama-vulkan}/bin/ollama pull starcoder2:15b    # Alternative for code
  '';

  # Configure ZED to use Ollama and the Mistral API.
  # ZED reads settings.json as JSONC: comments must use //, not #.
  home.file.".config/zed/settings.json".text = lib.mkForce ''
    {
      "mistral": {
        "apiKey": "$MISTRAL_API_KEY",   // Uses the environment variable set above
        "defaultModel": "mistral-pro"   // Default model for Mistral API calls
      },
      "ollama": {
        "endpoint": "$OLLAMA_HOST",     // Connects to the local Ollama instance
        "defaultModel": "codellama:70b" // Default model for the Ollama plugin
      }
      // Add other ZED plugin configurations here if needed
    }
  '';

  # --- Notes ---
  # 1. Pulling additional models:
  #    To pull more models later, run:
  #      ollama pull <model-name>
  #    Example: ollama pull llama3:8b
  #
  # 2. Switching GPU backends:
  #    - For NVIDIA: replace `ollama-vulkan` with `ollama` (uses CUDA)
  #    - For AMD: use `ollama-rocm` and ensure ROCm is installed
  #
  # 3. ZED plugin setup:
  #    - Install the Ollama and Mistral plugins in ZED via the plugin marketplace
  #    - The Ollama plugin will use the models pulled above
  #    - The Mistral plugin will use MISTRAL_API_KEY for cloud access
  #
  # 4. Custom prompts:
  #    To add custom prompts for Ollama, create a prompts.json file or
  #    configure prompts directly in the ZED Ollama plugin settings
  #
  # 5. Resource management:
  #    The user service stops when you log out; to keep Ollama running,
  #    enable lingering with `loginctl enable-linger $USER`
}
#+end_src
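One pitfall with the generated settings file: ZED accepts JSONC-style // comments, but strict JSON tooling rejects them. A small Python sketch (hypothetical helper with naive comment stripping) that checks such a fragment still parses:

#+begin_src python
import json
import re

def strip_line_comments(text):
    """Drop // line comments. Naive: only strips '//' preceded by whitespace,
    so URLs like http://127.0.0.1:11434 inside strings survive."""
    return re.sub(r"\s//[^\n]*", "", text)

# A fragment mirroring the settings written above
settings_text = '''
{
  "ollama": {
    "endpoint": "http://127.0.0.1:11434",  // local Ollama instance
    "defaultModel": "codellama:70b"
  }
}
'''

settings = json.loads(strip_line_comments(settings_text))
#+end_src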

** NCSway

Takes care of notifications.

#+begin_src nix :tangle home/desktop/ncsway.nix :noweb tangle :mkdirp yes