First, all the dependencies and imports we'll need.
:dep image = "0.24.5"
// From https://gitlab.com/landreville/ciede2000
:dep ciede2000 = { path = "~/workspace/CIEDE2000" }
// From https://gitlab.com/landreville/antco
:dep antco = { path = "~/workspace/antco" }
:dep evcxr_image = "1.1"
:dep evcxr_runtime = "1.1.0"
:dep lab = "0.11.0"
use std::io::Cursor;
use std::collections::HashMap;
use lab;
use image;
use image::io::Reader as ImageReader;
use image::{ImageBuffer, Pixel, Rgb, RgbImage};
use ciede2000::ciede2000;
use antco::{aco, AcoParameters, distance_matrix};
use evcxr_image::ImageDisplay;
use evcxr_runtime;
I took this photo of a fuchsia blossom shortly after I learned that growing fuchsia was an option.
let mut img = ImageReader::open("fuchsia.jpg")?.decode()?;
let mut bytes: Vec<u8> = Vec::new();
img.write_to(&mut Cursor::new(&mut bytes), image::ImageOutputFormat::Png)?;
evcxr_runtime::mime_type("image/png").bytes(&bytes)
I use the image crate to write the image out with heavy JPEG compression. This crushes the colours, making similar colours near each other come out as a middle ground between them. See the banding: the colours only shift when they become too distant from each other.
let mut bytes: Vec<u8> = Vec::new();
img.write_to(&mut Cursor::new(&mut bytes), image::ImageOutputFormat::Jpeg(15))?;
evcxr_runtime::mime_type("image/jpeg").bytes(&bytes)
Take a sampling of colours from the image. I want about 200 colours in my gradient, so I will only take the 200 most common colours from the compressed image.
img = ImageReader::new(&mut Cursor::new(&mut bytes)).with_guessed_format()?.decode()?;
let rgbimage: RgbImage = img.to_rgb8();
let mut pixels: HashMap<Rgb<u8>, usize> = HashMap::new();
for pixel in rgbimage.pixels() {
pixels.entry(pixel.to_rgb()).and_modify(|cnt| *cnt += 1).or_insert(1);
}
let mut sorted_pixels: Vec<(Rgb<u8>, usize)> = pixels.into_iter().collect();
// Sort by count, most common first.
sorted_pixels.sort_by(|a, b| b.1.cmp(&a.1));
let mut rgb_sample: Vec<Rgb<u8>> = Vec::new();
for (i, (pixel, _)) in sorted_pixels.iter().enumerate() {
if rgb_sample.len() >= 200 {
break;
}
if (i as f32 / sorted_pixels.len() as f32) < 0.5 {
rgb_sample.push(*pixel);
}
}
()
The following image is the selected colours in no particular order.
let doubled_rgb_sample: Vec<Rgb<u8>> = rgb_sample.iter().cloned().flat_map(|n| std::iter::repeat(n).take(2)).collect();
let mut tmp_img: RgbImage = ImageBuffer::from_fn(400, 400, |x, _| doubled_rgb_sample[x as usize] );
let mut bytes: Vec<u8> = Vec::new();
tmp_img.write_to(&mut Cursor::new(&mut bytes), image::ImageOutputFormat::Jpeg(100))?;
evcxr_runtime::mime_type("image/jpeg").bytes(&bytes)
Sort the colours by their RGB component values (a lexicographic sort on the red, green, and blue channels) and output the result. There's a gradient from dark to light, but it clearly shows bands that alternate between red and green.
rgb_sample.sort_by(|a, b| a.0.cmp(&b.0));
()
let doubled_rgb_sample: Vec<Rgb<u8>> = rgb_sample.iter().cloned().flat_map(|n| std::iter::repeat(n).take(2)).collect();
let mut tmp_img: RgbImage = ImageBuffer::from_fn(400, 400, |x, _| doubled_rgb_sample[x as usize] );
let mut bytes: Vec<u8> = Vec::new();
tmp_img.write_to(&mut Cursor::new(&mut bytes), image::ImageOutputFormat::Jpeg(100))?;
evcxr_runtime::mime_type("image/jpeg").bytes(&bytes)
I'm going to convert the colours from RGB to the CIELAB colour space. This colour space includes a measurement of perceptual lightness that can help ensure that light and dark colours do not come up together and upset the gradient. These next two functions convert RGB to LAB and vice versa. The math required is helpfully documented on Bruce Lindbloom's website (http://www.brucelindbloom.com/), including a note about how the original CIELAB specification had a rounding error that needed to be fixed.
fn rgb_to_lab(rgb: image::Rgb<u8>) -> (f32, f32, f32) {
let e = 216.0 / 24389.0;
let k = 24389.0 / 27.0;
let conv = |c: f32| if c <= 0.04045 { c / 12.92 } else { ((c + 0.055) / 1.055).powf(2.4) };
let r = conv(rgb.0[0] as f32 / 255.0);
let g = conv(rgb.0[1] as f32 / 255.0);
let b = conv(rgb.0[2] as f32 / 255.0);
let x = 0.4124 * r + 0.3576 * g + 0.1805 * b;
let y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
let z = 0.0193 * r + 0.1192 * g + 0.9505 * b;
let labconv = |t: f32| if t > e { t.powf(1.0/3.0) } else { (k*t + 16.0) / 116.0 };
let fy = labconv(y);
let l = 116.0 * fy - 16.0;
// Normalize by the D65 reference white (Xn = 0.95047, Zn = 1.08883).
let a = 500.0 * (labconv(x / 0.95047) - fy);
let b = 200.0 * (fy - labconv(z / 1.08883));
(l, a, b)
}
let mut lab_sample: Vec<(f32, f32, f32)> = rgb_sample.iter().map(|rgb| rgb_to_lab(*rgb)).collect();
fn lab_to_rgb(lab: (f32, f32, f32)) -> Rgb<u8> {
let e = 216.0 / 24389.0;
let k = 24389.0 / 27.0;
let (l, a, b) = lab;
let fy = (l + 16.0) / 116.0;
let fx = a / 500.0 + fy;
let fz = fy - b / 200.0;
let fx3 = fx.powi(3);
let fz3 = fz.powi(3);
let xr = if fx3 > e { fx3 } else { (116.0 * fx - 16.0) / k };
let yr = if l > k * e { fy.powi(3) } else { l / k };
let zr = if fz3 > e { fz3 } else { (116.0 * fz - 16.0) / k };
// Scale back up by the D65 reference white to match rgb_to_lab.
let x = xr * 0.95047;
let y = yr;
let z = zr * 1.08883;
let r = 3.2406 * x + -1.5372 * y + -0.4986 * z;
let g = -0.9689 * x + 1.8758 * y + 0.0415 * z;
let b = 0.0557 * x + -0.2040 * y + 1.0570 * z;
let scale = |c: f32| if c <= 0.0031308 { 12.92 * c } else { 1.055 * c.powf(1.0/2.4) - 0.055 };
let r = scale(r).clamp(0.0, 1.0);
let g = scale(g).clamp(0.0, 1.0);
let b = scale(b).clamp(0.0, 1.0);
Rgb([(r * 255.0).round() as u8, (g * 255.0).round() as u8, (b * 255.0).round() as u8])
}
lab_sample.sort_by(|a, b| (a.0).partial_cmp(&(b.0)).unwrap());
After converting to LAB and sorting the colours by lightness there is a clear gradient from dark to light, but still the colours are mixed together.
let doubled_sample: Vec<(f32, f32, f32)> = lab_sample.iter().cloned().flat_map(|n| std::iter::repeat(n).take(2)).collect();
let mut tmp_img: RgbImage = ImageBuffer::from_fn(400, 400, |x, _| lab_to_rgb(doubled_sample[x as usize]));
let mut tmp_bytes: Vec<u8> = Vec::new();
tmp_img.write_to(&mut Cursor::new(&mut tmp_bytes), image::ImageOutputFormat::Jpeg(100))?;
evcxr_runtime::mime_type("image/jpeg").bytes(&tmp_bytes)
Now that we're using the CIELAB colour space, we can also use a distance calculation based on visual perception. I use an Ant Colony Optimization algorithm, because the optimal ordering of the colours is the shortest perceptual route between them, and ACO is great for the traveling salesperson problem. This is the real meat that I wanted to port from Python to Rust. The algorithm is available at https://gitlab.com/landreville/antco.
fn run_aco(destinations: Vec<(f32,f32,f32)>) -> Vec<(f32, f32, f32)> {
let params = AcoParameters {
alpha: 1,
beta: 5,
q: 500.0,
ant_count: 1024,
evaporation_rate: 0.2,
};
let distances = distance_matrix(&destinations, |a: &(f32, f32, f32), b: &(f32, f32, f32)| -> f64 { ciede2000(a, b) as f64 });
let result: Vec<&(f32, f32, f32)> = aco(
&destinations,
&distances,
params
);
result.into_iter().cloned().collect()
}
// The Jupyter notebook functionality has some impact on lifetimes for inter-cell scoped variables.
// Putting run_aco into a function solved that.
let result = run_aco(lab_sample);
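For intuition about what the ants are optimizing, the tour can be compared against a much cheaper greedy nearest-neighbour baseline. This is my own sketch, not part of the pipeline: it uses squared Euclidean distance in LAB as a stand-in for CIEDE2000 and needs only std. ACO generally finds a shorter tour, but the greedy version captures the "shortest route through colour space" idea.

```rust
// Greedy nearest-neighbour tour: start at the first colour and repeatedly
// hop to the closest remaining colour. A classic cheap TSP heuristic.
fn greedy_tour(points: &[(f32, f32, f32)]) -> Vec<(f32, f32, f32)> {
    let mut remaining: Vec<(f32, f32, f32)> = points.to_vec();
    let mut tour = vec![remaining.swap_remove(0)];
    while !remaining.is_empty() {
        let last = *tour.last().unwrap();
        // Squared Euclidean distance in LAB, standing in for CIEDE2000.
        let d = |p: &(f32, f32, f32)| {
            (p.0 - last.0).powi(2) + (p.1 - last.1).powi(2) + (p.2 - last.2).powi(2)
        };
        let (idx, _) = remaining
            .iter()
            .enumerate()
            .min_by(|a, b| d(a.1).partial_cmp(&d(b.1)).unwrap())
            .unwrap();
        tour.push(remaining.swap_remove(idx));
    }
    tour
}
```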
This is the final result, matching what I found with the original Python version. The gradient follows perception without any abrupt banding. Of course some bands remain, because no new information is added, such as transitions between the colours available in the image.
let doubled_sample: Vec<(f32, f32, f32)> = result.iter().cloned().flat_map(|n| std::iter::repeat(n).take(2)).collect();
let mut tmp_img: RgbImage = ImageBuffer::from_fn(400, 400, |x, _| lab_to_rgb(doubled_sample[x as usize]));
let mut tmp_bytes: Vec<u8> = Vec::new();
tmp_img.write_to(&mut Cursor::new(&mut tmp_bytes), image::ImageOutputFormat::Jpeg(100))?;
evcxr_runtime::mime_type("image/jpeg").bytes(&tmp_bytes)