# Performance

*Numerics.NET* uses native, processor-specific code for its core computations. This gives you
performance comparable to the fastest native numerical libraries available.

For example, the classes in the `Extreme.Mathematics.LinearAlgebra` namespace use native BLAS and LAPACK routines wherever possible. BLAS (Basic Linear Algebra Subprograms) is the de facto standard for core numerical linear algebra routines such as matrix and vector products. LAPACK (Linear Algebra PACKage) is the standard for more complex functionality such as matrix decompositions and eigenvalue problems. The BLAS and LAPACK interface is public, so you can plug in your own implementation if desired. This is particularly important if you wish to use the library on a non-Windows platform.
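A pluggable BLAS back end generally takes the shape of an interface that the library calls through, with the native provider as one implementation and a managed fallback as another. The sketch below illustrates the idea only; the names `IBlasKernel` and `ManagedBlasKernel` are hypothetical and are not Numerics.NET's actual API.

```csharp
using System;

// Hypothetical interface: a minimal pluggable BLAS back end.
// A real BLAS interface is much richer; this only sketches the idea.
public interface IBlasKernel
{
    // GEMM: C = alpha * A * B + beta * C, for row-major n x n matrices.
    void Dgemm(int n, double alpha, double[] a, double[] b, double beta, double[] c);
}

// A portable, 100% managed fallback implementation.
public sealed class ManagedBlasKernel : IBlasKernel
{
    public void Dgemm(int n, double alpha, double[] a, double[] b, double beta, double[] c)
    {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
            {
                double sum = 0.0;
                for (int k = 0; k < n; k++)
                    sum += a[i * n + k] * b[k * n + j];
                c[i * n + j] = alpha * sum + beta * c[i * n + j];
            }
    }
}
```

On a platform without the native provider, an application would register a managed (or third-party) kernel like this one in place of the processor-specific implementation.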

All native routines also have managed equivalents. The managed code is not as fast as the native code, especially for larger problems, but it has the advantage of portability and a smaller memory footprint.
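As a rough illustration of how timings like those below are gathered, the following self-contained sketch times repeated matrix multiplications with `Stopwatch`. It uses a plain, naive managed multiply rather than the library's optimized routines, so its absolute numbers will not match the tables.

```csharp
using System;
using System.Diagnostics;

class MatrixBenchmark
{
    // Naive managed multiply of two n x n row-major matrices.
    static double[] Multiply(double[] a, double[] b, int n)
    {
        var c = new double[n * n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
            {
                double aik = a[i * n + k];
                for (int j = 0; j < n; j++)
                    c[i * n + j] += aik * b[k * n + j];
            }
        return c;
    }

    static void Main()
    {
        const int n = 50, iterations = 1000;
        var rng = new Random(17);
        var a = new double[n * n];
        var b = new double[n * n];
        for (int i = 0; i < n * n; i++) { a[i] = rng.NextDouble(); b[i] = rng.NextDouble(); }

        var sw = Stopwatch.StartNew();
        for (int iter = 0; iter < iterations; iter++)
            Multiply(a, b, n);
        sw.Stop();
        Console.WriteLine($"{iterations} multiplies of {n}x{n}: {sw.Elapsed.TotalSeconds:F2}s");
    }
}
```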

The tables below show some performance benchmarks. The tests were run on a 3 GHz Pentium 4 with 512 MB of RAM.

Benchmark results for the processor-specific, native implementation:

| Matrix size          | 5x5     | 50x50  | 1000x1000 |
|----------------------|---------|--------|-----------|
| Number of iterations | 500,000 | 10,000 | 10        |
| LU Decomposition     | 2.05s   | 1.31s  | 2.17s     |
| QR Decomposition     | 3.89s   | 5.10s  | 4.66s     |
| Matrix multiply      | 0.37s   | 0.78s  | 4.22s     |

Benchmark results for the 100% managed implementation:

| Matrix size          | 5x5     | 50x50  | 1000x1000 |
|----------------------|---------|--------|-----------|
| Number of iterations | 500,000 | 10,000 | 10        |
| LU Decomposition     | 1.18s   | 3.25s  | 10.11s    |
| QR Decomposition     | 2.57s   | 9.25s  | 45.30s    |
| Matrix multiply      | 0.38s   | 3.24s  | 27.52s    |