Version 1.4 of the Intel NPU Acceleration Library was released today. This Python library for Windows and Linux interfaces with the Intel Neural Processing Unit (NPU) found on recent Intel Core Ultra processors for AI offloading.
The intel-npu-acceleration-library 1.4 update brings several new features, new C++ code examples, documentation improvements, and more.
Among the new features in the Intel NPU 1.4 library update are support for operations on tensors, power and log softmax operations, a new Torch-compliant MATMUL operation, support for the Phi-3 MLP layer, other new operations, and a new “turbo mode”.
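For those unfamiliar with the library, the basic workflow is to hand it an existing PyTorch model and let it offload execution to the NPU. Below is a minimal sketch along the lines of the project's documented usage pattern; the exact keyword arguments (such as `dtype`) may differ between releases, so treat it as an illustration rather than the definitive 1.4 API.

```python
import torch
from torch import nn
import intel_npu_acceleration_library

# A small example network; any torch.nn.Module would do.
model = nn.Sequential(
    nn.Linear(256, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
).eval()

# Compile the model so its operations are offloaded to the Intel NPU.
# The dtype argument requests reduced-precision execution; keyword names
# and defaults may vary between library versions.
npu_model = intel_npu_acceleration_library.compile(model, dtype=torch.float16)

with torch.no_grad():
    out = npu_model(torch.rand(1, 256))
print(out.shape)
```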
I was curious about this new “turbo mode”, but from the perspective of this library it just sets a new turbo property that is passed on to the NPU driver. There is no other documentation or detail in that pull request.
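Conceptually, a flag like this simply travels from the user-facing API down into the driver configuration, with the driver deciding what “turbo” actually means. The sketch below illustrates that idea with hypothetical names (`NPUConfig`, `to_driver_properties`, the `"NPU_TURBO"` key); it is not the library's actual interface.

```python
from dataclasses import dataclass

@dataclass
class NPUConfig:
    # Hypothetical config object: 'turbo' is merely recorded here and
    # forwarded to the NPU driver when a model is compiled.
    turbo: bool = False

def to_driver_properties(config: NPUConfig) -> dict:
    # The library does nothing with the flag beyond passing it along;
    # its interpretation (clocks, scheduling, power limits) is left to the driver.
    return {"NPU_TURBO": "YES" if config.turbo else "NO"}

print(to_driver_properties(NPUConfig(turbo=True)))
```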
Those curious about all of the Intel NPU Acceleration Library 1.4 changes, or wanting to try out this NPU library on Windows or Linux systems, can visit the Intel GitHub repository.