Growth models#

All models are subclasses of `AbstractGrowthModel` and are passed inside a `ModelSpec`.

Built-in model registry#

KinBiont ships a library of pre-implemented growth models accessible via `MODEL_REGISTRY`:

```python
from pykinbiont import MODEL_REGISTRY

# List all available model names
print(list(MODEL_REGISTRY.keys()))

# Inspect a model
m = MODEL_REGISTRY["NL_logistic"]
print(type(m).__name__)    # NLModel
print(m.param_names)       # ['N_max', 'growth_rate', 'lag']
```

`MODEL_REGISTRY` is populated lazily on first access (this triggers Julia startup if Julia isn’t running yet). Keys with the `NL_` prefix are closed-form `NLModel` instances; all others are `ODEModel` instances.

Common built-in models#

| Key | Type | Description |
|---|---|---|
| `NL_logistic` | `NLModel` | 3-parameter logistic (Verhulst) |
| `NL_gompertz` | `NLModel` | Gompertz with lag phase |
| `NL_richards` | `NLModel` | Generalised logistic (Richards) |
| `NL_baranyi` | `NLModel` | Baranyi-Roberts with lag phase |
| `NL_huang` | `NLModel` | Huang model |
| `NL_exp` | `NLModel` | Simple exponential |

Run `list(MODEL_REGISTRY.keys())` in your environment for the full current list.

LogLinModel#

`LogLinModel` fits the exponential phase only, using log-linear regression. No initial parameter guess or bounds are needed — KinBiont detects the exponential window automatically.

```python
from pykinbiont import LogLinModel, ModelSpec

spec = ModelSpec(models=[LogLinModel()], params=[[]])
```

Use this as a quick baseline or when only the growth rate (not carrying capacity or lag) matters.
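
The idea behind log-linear regression is that log(OD) is a straight line in time during exponential growth, so its slope estimates the growth rate. A minimal NumPy sketch of that idea on made-up synthetic data (illustration only, not KinBiont's implementation, which also detects the exponential window):

```python
import numpy as np

# Synthetic exponential-phase data: OD(t) = OD0 * exp(mu * t)
mu_true = 0.8
t = np.linspace(0.0, 2.0, 20)
od = 0.05 * np.exp(mu_true * t)

# Log-linear regression: the slope of log(OD) vs t is the growth rate
slope, intercept = np.polyfit(t, np.log(od), 1)
print(f"estimated growth rate: {slope:.3f}")  # ~0.8
```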

NLModel — custom nonlinear function#

Wrap any Python callable `f(p, t) -> y` as an `NLModel`:

```python
import numpy as np
from pykinbiont import NLModel, ModelSpec

def gompertz(p, t):
    """p = [A, mu, lam]"""
    A, mu, lam = p[0], p[1], p[2]
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1))

custom = NLModel(
    name="gompertz",
    func=gompertz,
    param_names=["A", "mu", "lam"],
)

spec = ModelSpec(
    models=[custom],
    params=[[1.0, 0.5, 2.0]],
    lower=[[0.0, 0.0, 0.0]],
    upper=[[3.0, 3.0, 10.0]],
)
```

Function signature: `func(p, t)` where:

- `p` — 1-D NumPy array of parameters
- `t` — scalar or 1-D NumPy array of time points
- Return value — scalar or 1-D NumPy array of OD values matching `t`
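
A quick way to confirm a callable honors this contract is to evaluate it on both a scalar and an array, reusing the gompertz function from the example above:

```python
import numpy as np

def gompertz(p, t):
    """p = [A, mu, lam]; same function as in the example above."""
    A, mu, lam = p[0], p[1], p[2]
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1))

p = np.array([1.0, 0.5, 2.0])

y_scalar = gompertz(p, 0.0)     # scalar t -> scalar (0-d) y
t = np.linspace(0.0, 10.0, 50)
y = gompertz(p, t)              # 1-D t -> 1-D y of the same shape

print(np.ndim(y_scalar), y.shape == t.shape)  # 0 True
```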

Performance note

Custom Python functions are called on every optimizer iteration. For large datasets or many multistart restarts, consider implementing the model in Julia directly and loading it via `get_jl()`.

ODEModel — custom ODE#

For growth models defined as a system of ordinary differential equations, use `ODEModel`. The function signature follows the SciML in-place convention `f!(du, u, p, t)`:

```python
import numpy as np
from pykinbiont import ODEModel, ModelSpec

def logistic_ode(du, u, p, t):
    """
    Logistic ODE: dN/dt = mu * N * (1 - N/K)
    p = [mu, K]
    """
    mu, K = p[0], p[1]
    du[0] = mu * u[0] * (1.0 - u[0] / K)

ode_model = ODEModel(
    name="logistic_ode",
    func=logistic_ode,
    param_names=["mu", "K"],
    n_eq=1,   # number of state equations
)

spec = ModelSpec(
    models=[ode_model],
    params=[[0.5, 1.2]],
    lower=[[0.0, 0.1]],
    upper=[[5.0, 5.0]],
)
```

Function signature: `func(du, u, p, t)` where:

- `du` — output array, modified in place (shape `(n_eq,)`)
- `u` — current state (shape `(n_eq,)`)
- `p` — parameter array (1-D NumPy)
- `t` — scalar time
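
KinBiont integrates the system on the Julia side, but the in-place convention is easy to sanity-check in Python. The sketch below adapts logistic_ode from the example above to an out-of-place form and integrates it with a hand-rolled fixed-step RK4 loop (illustration only; this is not the solver KinBiont uses):

```python
import numpy as np

def logistic_ode(du, u, p, t):
    """dN/dt = mu * N * (1 - N/K); p = [mu, K], same as above."""
    mu, K = p[0], p[1]
    du[0] = mu * u[0] * (1.0 - u[0] / K)

def rhs(u, p, t, n_eq=1):
    """Adapt the in-place f!(du, u, p, t) convention to out-of-place."""
    du = np.empty(n_eq)
    logistic_ode(du, u, p, t)
    return du

# Fixed-step RK4 from N(0) = 0.05 up to t = 40
p = np.array([0.5, 1.2])   # mu = 0.5, K = 1.2
u = np.array([0.05])
dt, t = 0.01, 0.0
for _ in range(4000):
    k1 = rhs(u, p, t)
    k2 = rhs(u + dt / 2 * k1, p, t + dt / 2)
    k3 = rhs(u + dt / 2 * k2, p, t + dt / 2)
    k4 = rhs(u + dt * k3, p, t + dt)
    u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

print(u[0])  # approaches the carrying capacity K = 1.2
```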

DDDEModel — data-driven ODE discovery#

`DDDEModel` uses sparse regression (STLSQ, via DataDrivenDiffEq.jl) to discover an ODE from data without specifying the model form. No initial guess is needed.

```python
from pykinbiont import DDDEModel, ModelSpec

ddde = DDDEModel(
    max_degree=4,       # max polynomial degree in candidate basis
    lambda_min=-5.0,    # log10 of min STLSQ threshold
    lambda_max=-1.0,    # log10 of max STLSQ threshold
    lambda_step=0.5,
)
spec = ModelSpec(models=[ddde], params=[[]])
```
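
The core STLSQ idea is compact enough to sketch directly: regress dN/dt onto a library of candidate polynomial terms, then repeatedly zero out small coefficients and refit. The toy example below recovers the logistic law dN/dt = mu*N - (mu/K)*N^2 from clean synthetic data; it is a conceptual illustration, not DataDrivenSparse's implementation:

```python
import numpy as np

# Clean logistic data: mu = 1.0, K = 2.0, so dN/dt = 1.0*N - 0.5*N^2
mu, K, N0 = 1.0, 2.0, 0.1
t = np.linspace(0.0, 8.0, 200)
N = K / (1.0 + (K - N0) / N0 * np.exp(-mu * t))
dNdt = mu * N * (1.0 - N / K)          # exact derivatives for clarity

# Candidate basis: 1, N, N^2, N^3
Theta = np.column_stack([N**d for d in range(4)])

# Sequentially thresholded least squares (STLSQ)
xi = np.linalg.lstsq(Theta, dNdt, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1           # threshold lambda = 0.1
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(Theta[:, big], dNdt, rcond=None)[0]

print(xi)  # approximately [0, 1.0, -0.5, 0]
```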

Extra Julia packages required

`DDDEModel` requires `DataDrivenDiffEq`, `DataDrivenSparse`, and `ModelingToolkit` in the Julia environment. Install them once:

```julia
using Pkg
Pkg.add(["DataDrivenDiffEq", "DataDrivenSparse", "ModelingToolkit"])
```

Multi-model comparison#

Pass multiple models in one `ModelSpec`. KinBiont fits all of them and returns the one with
the lowest AICc as `best_model`:

```python
from pykinbiont import MODEL_REGISTRY, NLModel, ModelSpec

logistic = MODEL_REGISTRY["NL_logistic"]
gompertz = MODEL_REGISTRY["NL_gompertz"]

spec = ModelSpec(
    models=[logistic, gompertz],
    params=[[1.2, 0.01, 0.5], [1.2, 0.5, 1.0]],
    lower=[[0.3, 1e-3, 0.05], [0.3, 0.0, 0.0]],
    upper=[[3.0, 0.05, 3.0],  [3.0, 3.0, 10.0]],
)

```

All candidate model results are accessible in `CurveFitResult.all_results`.
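
AICc penalizes extra parameters more strongly on short curves. As a reference sketch, here is the standard least-squares form of the criterion (shown for intuition; the exact expression KinBiont computes may differ):

```python
import math

def aicc(rss, n, k):
    """Corrected AIC for a least-squares fit.
    rss: residual sum of squares, n: data points, k: fitted parameters."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# A model with slightly lower RSS but an extra parameter can still lose:
print(aicc(rss=0.010, n=20, k=3))
print(aicc(rss=0.009, n=20, k=4))
```

Lower is better, so on this toy comparison the 3-parameter model would be selected despite its larger residual.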