Artificial intelligence
Image recognition using TensorFlow

This page is outdated. Please visit here to see current use cases of Rust functions in AI.

This example shows how to write a Rust function for image recognition, and then offer it as AI-as-a-Service.
Using machine learning libraries in Rust, such as the Tract crate, which supports both TensorFlow and ONNX inference models, we can write AI-as-a-Service functions for Node.js. Such a function takes an AI model and input data, and returns inference results, such as the objects recognized in an input image, through a web service.
The example project source code is here.
The following Rust function does the inference.
  • The infer() function takes the raw bytes of a pre-trained TensorFlow model from ImageNet and an input image.
  • The infer_impl() function resizes the image, applies the model to it, and returns the top matched label and probability. The label indicates an object the ImageNet model has been trained to recognize.
use wasm_bindgen::prelude::*;
use tract_tensorflow::prelude::*;
use std::io::Cursor;

#[wasm_bindgen]
pub fn infer(model_data: &[u8], image_data: &[u8]) -> String {
    let res: (f32, u32) = infer_impl(model_data, image_data, 224, 224).unwrap();
    return serde_json::to_string(&res).unwrap();
}

fn infer_impl(model_data: &[u8], image_data: &[u8], image_height: usize, image_width: usize) -> TractResult<(f32, u32)> {
    // load the model
    let mut model_data_mut = Cursor::new(model_data);
    let mut model = tract_tensorflow::tensorflow().model_for_read(&mut model_data_mut)?;
    model.set_input_fact(0, InferenceFact::dt_shape(f32::datum_type(), tvec!(1, image_height, image_width, 3)))?;
    // optimize the model and get an execution plan
    let model = model.into_optimized()?;
    let plan = SimplePlan::new(&model)?;

    // open image, resize it and make a Tensor out of it
    let image = image::load_from_memory(image_data).unwrap().to_rgb();
    let resized = image::imageops::resize(&image, image_height as u32, image_width as u32, ::image::imageops::FilterType::Triangle);
    let image: Tensor = tract_ndarray::Array4::from_shape_fn((1, image_height, image_width, 3), |(_, y, x, c)| {
        resized[(x as _, y as _)][c] as f32 / 255.0
    })
    .into();

    // run the plan on the input
    let result = plan.run(tvec!(image))?;

    // find the max value with its index
    let best = result[0]
        .to_array_view::<f32>()?
        .iter()
        .cloned()
        .zip(1..)
        .max_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
    match best {
        Some(t) => Ok(t),
        None => Ok((0.0, 0)),
    }
}
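Note that serde_json serializes the Rust (f32, u32) tuple as a two-element JSON array, so the string returned by infer() has the form [probability, label_id]. The short snippet below is only an illustration of how that string is parsed; the sample values are taken from the output further down this page.

// Illustrative only: infer() returns the tuple as a JSON array, [probability, label_id]
var result = JSON.parse('[0.27039126,284]');
console.log(result[0]);   // probability, e.g. 0.27039126
console.log(result[1]);   // detected object id, e.g. 284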
The JavaScript function reads the model and image files and calls the Rust function.
const { infer } = require('../pkg/csdn_ai_demo_lib.js');

const fs = require('fs');
var data_model = fs.readFileSync("mobilenet_v2_1.4_224_frozen.pb");
var data_img_cat = fs.readFileSync("cat.png");
var data_img_hopper = fs.readFileSync("grace_hopper.jpg");

var result = JSON.parse( infer(data_model, data_img_hopper) );
console.log("Detected object id " + result[1] + " with probability " + result[0]);

var result = JSON.parse( infer(data_model, data_img_cat) );
console.log("Detected object id " + result[1] + " with probability " + result[0]);
Next, build the Rust function with ssvmup, and then run the JavaScript file in Node.js.
$ ssvmup build
$ cd node
$ node app.js
Detected object id 654 with probability 0.3256046
Detected object id 284 with probability 0.27039126
You can look up the detected object ID in the imagenet_slim_labels.txt file from ImageNet.
... ...
284 tiger cat
... ...
654 military uniform
... ...
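If you want to return a human-readable label instead of a numeric ID, a small lookup can be added on the JavaScript side. The sketch below is not part of the original example; it assumes imagenet_slim_labels.txt contains one label per line and that the reported IDs are 1-based line numbers, as the excerpt above suggests.

// Sketch: map a detected object id to its ImageNet label.
// Assumes imagenet_slim_labels.txt holds one label per line and ids are 1-based line numbers.
const fs = require('fs');
const labels = fs.readFileSync("imagenet_slim_labels.txt", "utf8").split("\n");

function labelFor(objectId) {
  return labels[objectId - 1];  // e.g. 284 -> "tiger cat", 654 -> "military uniform"
}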
Now, it should be easy for you to turn this example into a Node.js-based web service so that users can send in images and detect objects!
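As a starting point, here is a minimal sketch of such a service using the Express framework. Express itself, the /infer route, the port, and the request size limit are assumptions for illustration and are not part of the original example.

// Minimal sketch of an inference web service.
// Assumes Express 4.17+ is installed (npm install express); route and limit are illustrative.
const express = require('express');
const fs = require('fs');
const { infer } = require('../pkg/csdn_ai_demo_lib.js');

const model = fs.readFileSync("mobilenet_v2_1.4_224_frozen.pb");
const app = express();

// Accept raw image bytes in the request body
app.post('/infer', express.raw({ type: 'image/*', limit: '10mb' }), (req, res) => {
  const result = JSON.parse(infer(model, req.body));
  res.json({ object_id: result[1], probability: result[0] });
});

app.listen(8080, () => console.log('AI-as-a-Service listening on port 8080'));

A client could then send an image with, for example, curl --data-binary @grace_hopper.jpg -H "Content-Type: image/jpeg" http://localhost:8080/infer.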