This repository was archived by the owner on Apr 6, 2020. It is now read-only.

Commit 7692160

fixed merge conflicts and updated readme

2 parents e65cf89 + 4c2326f

File tree

8 files changed: +62, -4 lines

README.md

Lines changed: 59 additions & 2 deletions
@@ -31,12 +31,69 @@ python -m http.server
If you don't know how to start a server, check [this](https://github.com/processing/p5.js/wiki/Local-server) guide.

+## Examples Index
+
+Below is the current `release` examples index:
+
+### javascript
+
+ml5.js does not require p5.js; however, because ml5.js and p5.js are designed to play nicely with each other, most of our examples are currently developed together with p5.js. The following "vanilla" JavaScript examples showcase the use of ml5 without p5.js (a minimal sketch follows this list).
+
+* [FeatureExtractor_Image_Classification](https://ml5js.github.io/ml5-examples/javascript/FeatureExtractor_Image_Classification)
+* [ImageClassification_Video](https://ml5js.github.io/ml5-examples/javascript/ImageClassification_Video)
+* [ImageClassification](https://ml5js.github.io/ml5-examples/javascript/ImageClassification)
+* [StyleTransfer_Image](https://ml5js.github.io/ml5-examples/javascript/StyleTransfer_Image)
+* [PoseNet](https://ml5js.github.io/ml5-examples/javascript/PoseNet)
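For orientation, a minimal "vanilla" sketch in the spirit of the ImageClassification example above might look like the following. It assumes ml5 is loaded from a script tag and that the page contains an `<img id="image">` element; the result field names (`label`, `confidence`) follow the ml5 releases of this period and may differ in other versions.

```js
// Minimal sketch (assumptions: ml5 loaded via a <script> tag, an <img id="image"> in the DOM).
// Classifies a static image with MobileNet; no p5.js involved.
const img = document.getElementById('image');

const classifier = ml5.imageClassifier('MobileNet', () => {
  classifier.classify(img, (err, results) => {
    if (err) {
      console.error(err);
      return;
    }
    // results is an array of { label, confidence } objects, most confident first
    console.log(results[0].label, results[0].confidence);
  });
});
```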
+### p5js
+
+* [CVAE](https://ml5js.github.io/ml5-examples/p5js/CVAE)
+* [BodyPix_Image](https://ml5js.github.io/ml5-examples/p5js/BodyPix/BodyPix_Image)
+* [BodyPix_Webcam](https://ml5js.github.io/ml5-examples/p5js/BodyPix/BodyPix_Webcam)
+* [BodyPix_Webcam_Parts](https://ml5js.github.io/ml5-examples/p5js/BodyPix/BodyPix_Webcam_Parts)
+* [DCGAN](https://ml5js.github.io/ml5-examples/p5js/DCGAN)
+* [Sentiment](https://ml5js.github.io/ml5-examples/p5js/Sentiment)
+* [UNET](https://ml5js.github.io/ml5-examples/p5js/UNET/UNET_webcam)
+* [Word2Vec](https://ml5js.github.io/ml5-examples/p5js/Word2Vec)
+* [FeatureExtractor_Image_Classification](https://ml5js.github.io/ml5-examples/p5js/FeatureExtractor/FeatureExtractor_Image_Classification)
+* [FeatureExtractor_Image_Regression](https://ml5js.github.io/ml5-examples/p5js/FeatureExtractor/FeatureExtractor_Image_Regression)
+* [StyleTransfer_Video](https://ml5js.github.io/ml5-examples/p5js/StyleTransfer/StyleTransfer_Video)
+* [StyleTransfer_Image](https://ml5js.github.io/ml5-examples/p5js/StyleTransfer/StyleTransfer_Image)
+* [ImageClassification_Video](https://ml5js.github.io/ml5-examples/p5js/ImageClassification/ImageClassification_Video)
+* [ImageClassification_VideoScavengerHunt](https://ml5js.github.io/ml5-examples/p5js/ImageClassification/ImageClassification_VideoScavengerHunt)
+* [ImageClassification](https://ml5js.github.io/ml5-examples/p5js/ImageClassification/ImageClassification)
+* [ImageClassification_VideoSoundTranslate](https://ml5js.github.io/ml5-examples/p5js/ImageClassification/ImageClassification_VideoSoundTranslate)
+* [ImageClassification_VideoSound](https://ml5js.github.io/ml5-examples/p5js/ImageClassification/ImageClassification_VideoSound)
+* [ImageClassification_MultipleImages](https://ml5js.github.io/ml5-examples/p5js/ImageClassification/ImageClassification_MultipleImages)
+* [KNNClassification_VideoSound](https://ml5js.github.io/ml5-examples/p5js/KNNClassification/KNNClassification_VideoSound)
+* [KNNClassification_Video](https://ml5js.github.io/ml5-examples/p5js/KNNClassification/KNNClassification_Video)
+* [KNNClassification_PoseNet](https://ml5js.github.io/ml5-examples/p5js/KNNClassification/KNNClassification_PoseNet)
+* [KNNClassification_VideoSquare](https://ml5js.github.io/ml5-examples/p5js/KNNClassification/KNNClassification_VideoSquare)
+* [SketchRNN_basic](https://ml5js.github.io/ml5-examples/p5js/SketchRNN/SketchRNN_basic)
+* [SketchRNN_interactive](https://ml5js.github.io/ml5-examples/p5js/SketchRNN/SketchRNN_interactive)
+* [PitchDetection_Game](https://ml5js.github.io/ml5-examples/p5js/PitchDetection/PitchDetection_Game)
+* [PitchDetection](https://ml5js.github.io/ml5-examples/p5js/PitchDetection/PitchDetection)
+* [CharRNN_Interactive](https://ml5js.github.io/ml5-examples/p5js/CharRNN/CharRNN_Interactive)
+* [CharRNN_Text](https://ml5js.github.io/ml5-examples/p5js/CharRNN/CharRNN_Text)
+* [CharRNN_Text_Stateful](https://ml5js.github.io/ml5-examples/p5js/CharRNN/CharRNN_Text_Stateful)
+* [Pix2Pix_callback](https://ml5js.github.io/ml5-examples/p5js/Pix2Pix/Pix2Pix_callback)
+* [Pix2Pix_promise](https://ml5js.github.io/ml5-examples/p5js/Pix2Pix/Pix2Pix_promise)
+* [YOLO_webcam](https://ml5js.github.io/ml5-examples/p5js/YOLO/YOLO_webcam)
+* [YOLO_single_image](https://ml5js.github.io/ml5-examples/p5js/YOLO/YOLO_single_image)
+* [images](https://ml5js.github.io/ml5-examples/p5js/YOLO/YOLO_single_image/images)
+* [PoseNet_image_single](https://ml5js.github.io/ml5-examples/p5js/PoseNet/PoseNet_image_single)
+* [data](https://ml5js.github.io/ml5-examples/p5js/PoseNet/PoseNet_image_single/data)
+* [PoseNet_webcam](https://ml5js.github.io/ml5-examples/p5js/PoseNet/PoseNet_webcam)
+* [PoseNet_part_selection](https://ml5js.github.io/ml5-examples/p5js/PoseNet/PoseNet_part_selection)
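The same classifier wrapped in a p5.js sketch, in the spirit of ImageClassification_Video, might look roughly like this. It assumes p5.js and ml5.js are loaded via script tags and a webcam is available; it is an illustrative sketch, not the exact code of any example listed above.

```js
// Illustrative p5.js + ml5 sketch: continuously label the webcam feed with MobileNet.
let classifier;
let video;
let label = 'loading...';

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.hide();
  // Passing the video element lets classify() reuse it for every frame.
  classifier = ml5.imageClassifier('MobileNet', video, classifyVideo);
}

function classifyVideo() {
  classifier.classify(gotResult);
}

function gotResult(err, results) {
  if (err) {
    console.error(err);
    return;
  }
  label = results[0].label;
  classifyVideo(); // classify the next frame
}

function draw() {
  image(video, 0, 0, width, height);
  fill(255);
  textSize(16);
  text(label, 10, height - 10);
}
```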
## p5.js web editor examples

The p5.js examples can also be run using the [p5.js web editor](https://alpha.editor.p5js.org). We are [in the process of porting](https://github.com/ml5js/ml5-examples/issues/6) and would welcome any contributions!

-* [PoseNet Example](https://alpha.editor.p5js.org/ml5/sketches/B1uDXDugX)
-* [YOLO Example](https://alpha.editor.p5js.org/ml5/sketches/HyKg7DOe7)
+You can find all of our examples here:
+* [ml5 on editor.p5js.org](https://editor.p5js.org/ml5/sketches)
+
+NOTE: not all of the ml5.js examples are currently working on the p5.js web editor. Stay tuned for updates!

## Contributing
p5js/DCGAN/model/group1-shard1of4 (4 MB, binary file not shown)

p5js/DCGAN/model/group1-shard2of4 (4 MB, binary file not shown)

p5js/DCGAN/model/group1-shard3of4 (4 MB, binary file not shown)

p5js/DCGAN/model/group1-shard4of4 (2.68 MB, binary file not shown)

p5js/DCGAN/model/model.json

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
{"modelTopology": {"keras_version": "2.2.2", "backend": "tensorflow", "model_config": {"class_name": "Sequential", "config": [{"class_name": "Dense", "config": {"name": "dense_2", "trainable": true, "batch_input_shape": [null, 128], "dtype": "float32", "units": 8192, "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "BatchNormalization", "config": {"name": "batch_normalization_5", "trainable": true, "axis": 1, "momentum": 0.99, "epsilon": 0.001, "center": true, "scale": true, "beta_initializer": {"class_name": "Zeros", "config": {}}, "gamma_initializer": {"class_name": "Ones", "config": {}}, "moving_mean_initializer": {"class_name": "Zeros", "config": {}}, "moving_variance_initializer": {"class_name": "Ones", "config": {}}, "beta_regularizer": null, "gamma_regularizer": null, "beta_constraint": null, "gamma_constraint": null}}, {"class_name": "Activation", "config": {"name": "activation_6", "trainable": true, "activation": "relu"}}, {"class_name": "Reshape", "config": {"name": "reshape_2", "trainable": true, "target_shape": [512, 4, 4]}}, {"class_name": "Conv2DTranspose", "config": {"name": "conv2d_transpose_5", "trainable": true, "filters": 256, "kernel_size": [4, 4], "strides": [2, 2], "padding": "same", "data_format": "channels_first", "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null, "output_padding": null}}, {"class_name": "BatchNormalization", "config": {"name": "batch_normalization_6", "trainable": true, "axis": 1, "momentum": 0.99, "epsilon": 0.001, "center": true, "scale": true, "beta_initializer": {"class_name": "Zeros", "config": {}}, "gamma_initializer": {"class_name": "Ones", "config": {}}, "moving_mean_initializer": {"class_name": "Zeros", "config": {}}, "moving_variance_initializer": {"class_name": "Ones", "config": {}}, "beta_regularizer": null, "gamma_regularizer": null, "beta_constraint": null, "gamma_constraint": null}}, {"class_name": "Activation", "config": {"name": "activation_7", "trainable": true, "activation": "relu"}}, {"class_name": "Conv2DTranspose", "config": {"name": "conv2d_transpose_6", "trainable": true, "filters": 128, "kernel_size": [4, 4], "strides": [2, 2], "padding": "same", "data_format": "channels_first", "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null, "output_padding": null}}, {"class_name": "BatchNormalization", "config": {"name": "batch_normalization_7", "trainable": true, "axis": 1, "momentum": 0.99, "epsilon": 0.001, "center": true, "scale": true, "beta_initializer": {"class_name": "Zeros", "config": {}}, "gamma_initializer": {"class_name": "Ones", 
"config": {}}, "moving_mean_initializer": {"class_name": "Zeros", "config": {}}, "moving_variance_initializer": {"class_name": "Ones", "config": {}}, "beta_regularizer": null, "gamma_regularizer": null, "beta_constraint": null, "gamma_constraint": null}}, {"class_name": "Activation", "config": {"name": "activation_8", "trainable": true, "activation": "relu"}}, {"class_name": "Conv2DTranspose", "config": {"name": "conv2d_transpose_7", "trainable": true, "filters": 64, "kernel_size": [4, 4], "strides": [2, 2], "padding": "same", "data_format": "channels_first", "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null, "output_padding": null}}, {"class_name": "BatchNormalization", "config": {"name": "batch_normalization_8", "trainable": true, "axis": 1, "momentum": 0.99, "epsilon": 0.001, "center": true, "scale": true, "beta_initializer": {"class_name": "Zeros", "config": {}}, "gamma_initializer": {"class_name": "Ones", "config": {}}, "moving_mean_initializer": {"class_name": "Zeros", "config": {}}, "moving_variance_initializer": {"class_name": "Ones", "config": {}}, "beta_regularizer": null, "gamma_regularizer": null, "beta_constraint": null, "gamma_constraint": null}}, {"class_name": "Activation", "config": {"name": "activation_9", "trainable": true, "activation": "relu"}}, {"class_name": "Conv2DTranspose", "config": {"name": "conv2d_transpose_8", "trainable": true, "filters": 3, "kernel_size": [4, 4], "strides": [2, 2], "padding": "same", "data_format": "channels_first", "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null, "output_padding": null}}, {"class_name": "Activation", "config": {"name": "activation_10", "trainable": true, "activation": "tanh"}}]}}, "weightsManifest": [{"paths": ["group1-shard1of4", "group1-shard2of4", "group1-shard3of4", "group1-shard4of4"], "weights": [{"name": "batch_normalization_5/gamma", "shape": [8192], "dtype": "float32"}, {"name": "batch_normalization_5/beta", "shape": [8192], "dtype": "float32"}, {"name": "batch_normalization_5/moving_mean", "shape": [8192], "dtype": "float32"}, {"name": "batch_normalization_5/moving_variance", "shape": [8192], "dtype": "float32"}, {"name": "batch_normalization_6/gamma", "shape": [256], "dtype": "float32"}, {"name": "batch_normalization_6/beta", "shape": [256], "dtype": "float32"}, {"name": "batch_normalization_6/moving_mean", "shape": [256], "dtype": "float32"}, {"name": "batch_normalization_6/moving_variance", "shape": [256], "dtype": "float32"}, {"name": "batch_normalization_7/gamma", "shape": [128], "dtype": "float32"}, {"name": "batch_normalization_7/beta", "shape": [128], "dtype": "float32"}, {"name": "batch_normalization_7/moving_mean", "shape": [128], "dtype": "float32"}, {"name": "batch_normalization_7/moving_variance", "shape": [128], "dtype": "float32"}, {"name": "batch_normalization_8/gamma", "shape": [64], "dtype": "float32"}, {"name": "batch_normalization_8/beta", 
"shape": [64], "dtype": "float32"}, {"name": "batch_normalization_8/moving_mean", "shape": [64], "dtype": "float32"}, {"name": "batch_normalization_8/moving_variance", "shape": [64], "dtype": "float32"}, {"name": "conv2d_transpose_5/kernel", "shape": [4, 4, 256, 512], "dtype": "float32"}, {"name": "conv2d_transpose_5/bias", "shape": [256], "dtype": "float32"}, {"name": "conv2d_transpose_6/kernel", "shape": [4, 4, 128, 256], "dtype": "float32"}, {"name": "conv2d_transpose_6/bias", "shape": [128], "dtype": "float32"}, {"name": "conv2d_transpose_7/kernel", "shape": [4, 4, 64, 128], "dtype": "float32"}, {"name": "conv2d_transpose_7/bias", "shape": [64], "dtype": "float32"}, {"name": "conv2d_transpose_8/kernel", "shape": [4, 4, 3, 64], "dtype": "float32"}, {"name": "conv2d_transpose_8/bias", "shape": [3], "dtype": "float32"}, {"name": "dense_2/kernel", "shape": [128, 8192], "dtype": "float32"}, {"name": "dense_2/bias", "shape": [8192], "dtype": "float32"}]}]}

p5js/DCGAN/sketch.js

Lines changed: 1 addition & 1 deletion
@@ -38,4 +38,4 @@ function displayImage(err, result) {
    return;
  }
  image(result.image, 0, 0, 200, 200);
-}
+}

(The removed and added closing braces are textually identical; the change is most likely the addition of a trailing newline at the end of the file.)
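For context, the hunk above is the tail of the DCGAN example's result callback. A rough, hypothetical reconstruction of the surrounding sketch is shown below; only `displayImage` comes from the diff, and the `ml5.DCGAN(...)` constructor argument and the `generate()` call are assumptions about the ml5 DCGAN API rather than the actual example code.

```js
// Hypothetical surrounding sketch; only displayImage() appears in the diff above.
let dcgan;

function setup() {
  createCanvas(200, 200);
  // Assumption: the constructor takes a path to the model manifest added in this
  // commit plus a ready callback. Check the actual sketch.js for the real call.
  dcgan = ml5.DCGAN('model/model.json', modelReady);
}

function modelReady() {
  // Assumption: generate() produces one image and passes (err, result) to the callback.
  dcgan.generate(displayImage);
}

function displayImage(err, result) {
  if (err) {
    console.error(err);
    return;
  }
  image(result.image, 0, 0, 200, 200);
}
```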

p5js/FeatureExtractor/FeatureExtractor_Image_Classification/index.html

Lines changed: 1 addition & 1 deletion
@@ -33,4 +33,4 @@ <h1>Image Classification using Feature Extractor with MobileNet</h1>
  </p>
  <script src="sketch.js"></script>
</body>
-</html>
+</html>

(Likely the same end-of-file newline fix as in sketch.js above.)
