Content Swiper for React Native is released

Check out our content swiper component for #reactnative

  • When we were implementing the Cheapp application, we were looking for a nice component for viewing images.
  • In our app we had an upload indicator and other extras built into our image viewer, so the component we released is built pretty much from scratch, to make it a bit simpler and a bit more customizable so that it hopefully fits other people’s needs better too.
  • The package includes Slide, SlideZoom, RotateY and Stack animations.
  • index=0 means that the image is active, and you can use these parameters to define enter/exit animations for your items as you like.
  • Here’s an example of the normal slide animation; the other animations work in much the same way (see the sketch below).
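
The excerpt doesn’t show the component’s actual API, so the following is only a rough sketch of the idea described above: an animation function receives each item’s index relative to the active item (index = 0 is active) and returns the styles that move items in and out. The ContentSwiper import, the animation prop, and the function shape are assumptions for illustration, not the package’s confirmed interface.

    import React from 'react';
    import { Dimensions, Image } from 'react-native';
    // Hypothetical import; check the package README for the real name and API.
    import ContentSwiper from 'react-native-content-swiper';

    const { width } = Dimensions.get('window');

    // A plain slide animation: each item is offset horizontally by its
    // distance from the active item, so index 0 sits centered while
    // index 1 waits just off-screen to the right.
    const slide = (index) => ({
      transform: [{ translateX: index * width }],
    });

    const Gallery = ({ urls }) => (
      <ContentSwiper animation={slide}>
        {urls.map((uri) => (
          <Image key={uri} source={{ uri }} style={{ width, height: 300 }} />
        ))}
      </ContentSwiper>
    );

    export default Gallery;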

Content Swiper component for React Native is out now
Continue reading “Content Swiper for React Native is released”

How HBO’s Silicon Valley built “Not Hotdog” with mobile TensorFlow, Keras & React Native

How HBO’s Silicon Valley built “Not Hotdog”, a real AI App:  via @timanglade #DataScience

  • The depth and openness of the deep learning community, and the presence of talented minds like R.C., is what makes deep learning viable for applications today — but they also make working in this field more thrilling than any tech trend we’ve been involved with. Our final architecture ended up making significant departures from the MobileNets architecture or from convention; in particular, we do not use Batch Normalization and Activation between depthwise and pointwise convolutions, because the Xception paper (which discussed depthwise convolutions in detail) seemed to indicate it would actually lead to less accuracy in architectures of this type (as helpfully pointed out by the author of the QuickNet paper on Reddit).
  • While this is a subject of some debate these days, our experiments placing BN after activation on small networks also failed to converge. To optimize the network we used Cyclical Learning Rates and (fellow student) Brad Kenstler’s excellent Keras implementation.
  • This was hard to defend against, as a) there just aren’t that many photographs of hotdogs in soft focus (we get hungry just thinking about it) and b) it could be damaging to spend too much of our network’s capacity training for soft focus, when realistically most images taken with a mobile phone will not have that feature.
  • Of the remaining 147k images, most were of food, with just 3k photos of non-food items, to help the network generalize a bit more and not get tricked into seeing a hotdog if presented with an image of a human in a red outfit. Our data augmentation rules were as follows: rotations within ±135 degrees (significantly more than average, because we coded the application to disregard phone orientation); height and width shifts of 20%; a shear range of 30%; a zoom range of 10%; channel shifts of 20%; and random horizontal flips to help the network generalize. These numbers were derived intuitively, based on experiments and our understanding of the real-life usage of our app, as opposed to careful experimentation. The final key to our data pipeline was using Patrick Rodriguez’s multiprocess image data generator for Keras.
  • Phase 2 ran for 64 more epochs (4 CLR cycles with a step size of 8 epochs), with a learning rate between 0.0004 and 0.0045, on a triangular2 policy. Phase 3 ran for 64 more epochs (4 CLR cycles with a step size of 8 epochs), with a learning rate between 0.000015 and 0.0002, on a triangular2 policy (see the schedule sketch below). While the learning rates were identified by running the linear experiment recommended by the CLR paper, they seem to make intuitive sense, in that the max for each phase is within a factor of 2 of the previous minimum, which is aligned with the industry-standard recommendation of halving your learning rate if your accuracy plateaus during training. In the interest of time we performed some training runs on a Paperspace P5000 instance running Ubuntu.
  • In those cases, we were able to double the batch size, and found that the optimal learning rates for each phase were roughly double as well. Even having designed a relatively compact neural architecture, and having trained it to handle situations it may find in a mobile context, we had a lot of work left…
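
For reference, “triangular2” is one of the learning-rate policies from the CLR paper: the rate oscillates linearly between a minimum and a maximum, and the amplitude is halved after every cycle. Below is a small sketch of that schedule, written in JavaScript purely for consistency with the rest of this digest (the article itself used Brad Kenstler’s Keras callback):

    // CLR "triangular2" schedule, following the formulation in the CLR paper.
    // step: current training iteration
    // stepSize: iterations per half-cycle (8 epochs' worth of batches here)
    // baseLr / maxLr: e.g. 0.0004 and 0.0045 for Phase 2 above
    function triangular2(step, stepSize, baseLr, maxLr) {
      const cycle = Math.floor(1 + step / (2 * stepSize));
      const x = Math.abs(step / stepSize - 2 * cycle + 1);
      // Triangular wave between baseLr and maxLr, halved each cycle.
      return baseLr + ((maxLr - baseLr) * Math.max(0, 1 - x)) / 2 ** (cycle - 1);
    }

With a step size of 8 epochs, one full cycle is 16 epochs, which is why each 64-epoch phase above corresponds to 4 CLR cycles.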

How Silicon Valley built the real AI app that identifies hotdogs (and not hotdogs) using mobile TensorFlow, Keras & React Native.
Continue reading “How HBO’s Silicon Valley built “Not Hotdog” with mobile TensorFlow, Keras & React Native”

Why props are your friend!

Why props are your friend! => 

{ Story by @RockChalkDev }

🍻 #ReactJS #JavaScript

  • The way this works is that when the user comes to the Home page, they are greeted with four images that represent their respective collections.
  • When the user selects one of the four images on the Home page, an async method is called.
  • This function is responsible for two things:

    When the user gets to the Portfolio page they are greeted with a Carousel displaying the images.

  • The Portfolio component is connected to the Redux store, so it has access to the app state, from which it reads the piece we need (see the sketch after this list).
  • Below you can see what this looks like on a user’s selection:

    The Redux store doesn’t have that information available in state.
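
The excerpt elides the actual method and state names, so the sketch below uses made-up ones purely to illustrate the flow it describes: an async handler on the Home page fetches the chosen collection and dispatches it to the Redux store, and the connected Portfolio component receives those images back as props. The action shape, the /api/collections endpoint, the state slice, and the Carousel import are all hypothetical.

    import React from 'react';
    import { connect } from 'react-redux';
    import Carousel from './Carousel'; // hypothetical carousel component

    // Home page: tapping one of the four images calls an async action
    // creator (run through the redux-thunk middleware) that loads the
    // collection's images and puts them in the Redux store.
    export const selectCollection = (name) => async (dispatch) => {
      const res = await fetch(`/api/collections/${name}`); // hypothetical endpoint
      const images = await res.json();
      dispatch({ type: 'COLLECTION_SELECTED', name, images });
    };

    // Portfolio page: connected to the store, so the selected images
    // arrive as props and can be rendered in the carousel.
    const Portfolio = ({ images }) => (
      <Carousel>
        {images.map((src) => (
          <img key={src} src={src} alt="" />
        ))}
      </Carousel>
    );

    // hypothetical state slice written by the reducer for COLLECTION_SELECTED
    const mapStateToProps = (state) => ({ images: state.selectedCollection });

    export default connect(mapStateToProps)(Portfolio);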

This is the first time I’ve ever done a blog post; I guess this is a blog post. I wanted to share this because I’m pretty impressed with my use of props &…
Continue reading “Why props are your friend!”

Resize local images with React Native

  • Whenever your app requires the user to take a picture with the camera, or pick one from the camera roll, the image is often so heavy that it takes ages to upload to the server.
  • To resize local image files, we created react-native-image-resizer.
  • Our package helps you with this situation by allowing image resizing directly from React Native (see the sketch below).
  • You can use an image from the camera roll or a base64 encoded image.
  • For example, base64 support for images and updates for the latest React Native versions were contributed by the community.
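
A minimal sketch of the resize-before-upload flow. createResizedImage is the package’s main entry point, but its argument list and the shape of the resolved value have varied between versions, so treat this as an approximation and check the README for the version you install:

    import ImageResizer from 'react-native-image-resizer';

    // Shrink a local photo before uploading it to the server.
    async function resizeForUpload(uri) {
      const resized = await ImageResizer.createResizedImage(
        uri,    // local file path (or a base64 image, where supported)
        800,    // max width
        600,    // max height
        'JPEG', // compress format
        80,     // quality, 0-100
      );
      // Older versions resolve with a plain path string,
      // newer ones with an object such as { uri, path, name, size }.
      return typeof resized === 'string' ? resized : resized.uri;
    }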

Rescale local image files in your React Native app, to reduce their size and upload them to a server.
Continue reading “Resize local images with React Native”