UI Updates for ColdSpotting!

ColdSpotting just released a brand new version! New live charting of pings helps you isolate recent network conditions, and we added a frequently requested feature: a custom Destination IP, so you can test specific network endpoints!
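
ColdSpotting itself is an iOS app, so this is not how the app is implemented, but here is a minimal sketch of the idea behind live ping charting. It times a TCP connect to a user-chosen destination as a stand-in for ICMP (which requires raw sockets); the destination address and port are just examples:

```python
import socket
import time

def sample_rtt(host, port=53, timeout=2.0):
    """Time a TCP connect to `host` as a stand-in for an ICMP ping.
    Returns latency in milliseconds, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

# Poll a user-chosen Destination IP once per second, as a live chart would.
destination = "8.8.8.8"  # example destination; port 53 is open on this DNS server
for _ in range(5):
    rtt = sample_rtt(destination)
    print(f"{destination}: {rtt:.1f} ms" if rtt is not None else "timeout")
    time.sleep(1)
```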

See No Evil

Here is our first test of realtime logo detection and obfuscation in Augmented Reality. I've been researching ways to remove advertisements from an AR context on iOS. I'm not sure how well this will scale, or how much training I'll need to improve it, but it's fun to play with right now. What should I try next? I could attempt some brand replacement, or maybe swap the current 2D plane for GLSL shaders.
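
The prototype itself runs on iOS, but as a platform-agnostic sketch of the detect-then-obscure loop, here is roughly what one pass could look like with OpenCV template matching in Python. The file names are hypothetical, and template matching is far cruder than a trained detector (no scale or rotation invariance):

```python
import cv2

def obscure_logo(frame, logo_template, threshold=0.8):
    """Find a known logo in a frame via template matching and blur it."""
    result = cv2.matchTemplate(frame, logo_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val >= threshold:
        x, y = max_loc
        h, w = logo_template.shape[:2]
        # Replace the matched region with a heavy blur.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0)
    return frame

template = cv2.imread("logo.png")  # reference image of the logo to hide
capture = cv2.VideoCapture(0)      # webcam stand-in for the AR camera feed
while True:
    ok, frame = capture.read()
    if not ok:
        break
    cv2.imshow("see-no-evil", obscure_logo(frame, template))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
```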

Utilizing Viral YouTube Challenges as Curated Data Sets for Deep Learning

Google just published a really interesting article about how they developed a depth estimation algorithm using data from the viral "Mannequin Challenge". In this popular YouTube challenge, people hold rigid poses in a variety of scenarios while a handheld camera moves through the scene. That makes for a fantastic data set: humans are usually the salient target of a camera, and because everyone is frozen, the computational complexity that kinetic human movement normally introduces is absent. The challenge also drew diverse participants from all over the world in vastly differing settings, making the data set particularly useful.
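
The paper trains a network rather than triangulating directly, but the geometric intuition it exploits is classical parallax: with a frozen scene, two positions of a moving camera behave like a stereo pair. A toy sketch of that relation (all numbers are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classical triangulation: Z = f * B / d.

    A point that shifts `disparity_px` pixels between two views, seen by
    a camera with focal length `focal_px` that moved `baseline_m` meters,
    sits at depth Z meters -- valid only because the scene holds still."""
    return focal_px * baseline_m / disparity_px

# e.g. 1000 px focal length, 10 cm of camera travel, 25 px of shift:
print(depth_from_disparity(1000, 0.10, 25))  # -> 4.0 meters
```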

The results are incredible

Using over 2,000 videos, they were able to achieve fantastic results compared with other state-of-the-art depth estimation approaches.

Having seen how successfully this crowd-sourced data set was put to use, what value might be drawn from other viral video data sets?

The ALS Ice Bucket Challenge and the Onset of Hypothermia

The first thing that came to my mind was the ALS Ice Bucket Challenge, in which participants are doused with ice water while their reactions are filmed. This curated data set shares some of the valuable features of the Mannequin Challenge, but offers a different avenue of investigation: can we use data from these videos to detect the symptoms of hypothermia or other temperature-induced maladies? A search for the "Ice Bucket Challenge" returns almost 2 million results. We have a remarkable opportunity to use these memes to generate valuable insights into human reactions to stimuli.
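
Assembling such a data set would start with simply enumerating candidate videos. A sketch using the YouTube Data API v3 (you would still need to filter and annotate the results by hand; the API key is a placeholder):

```python
import requests

API_KEY = "YOUR_API_KEY"  # YouTube Data API v3 key from the Google Cloud console
SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"

def find_challenge_videos(query, max_results=50):
    """Collect candidate video IDs and titles for a challenge data set."""
    params = {
        "part": "snippet",
        "q": query,
        "type": "video",
        "maxResults": max_results,  # the API caps a single page at 50
        "key": API_KEY,
    }
    response = requests.get(SEARCH_URL, params=params, timeout=10)
    response.raise_for_status()
    return [
        {"id": item["id"]["videoId"], "title": item["snippet"]["title"]}
        for item in response.json().get("items", [])
    ]

for video in find_challenge_videos("Ice Bucket Challenge"):
    print(video["id"], video["title"])
```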

The Cinnamon Challenge and Respiratory Inflammation

I don't advocate anyone give this one a try, but the Cinnamon Challenge had participants attempt to swallow a spoonful of cinnamon, which caused most individuals to cough violently and inevitably inhale fine particles of cinnamon. These individuals experience a high degree of respiratory distress, and once again are captured on camera for us to analyze.

Just looking through the list of viral challenges, a few look like they could provide valuable medical insights and may be worth investigating:

  • Ghost Pepper Challenge - Irritation/Nausea/Vomiting/Analgesic Reactions
  • Rotating Corn Challenge - Loose Teeth/Tooth Decay/Gum Disease
  • Tide Pod Challenge - Poisoning
  • Kylie Jenner Lip Challenge - Inflammation/Allergic Reactions
  • Car Surfing Challenge - Scrapes/Lacerations/Bruising/Broken Bones/Overall Life Expectancy

What other challenges can provide insight for us?

References:

Learning the Depths of Moving People by Watching Frozen People (video):

https://www.youtube.com/watch?v=fj_fK74y5_0

Moving Camera, Moving People: A Deep Learning Approach to Depth Prediction (Google AI Blog):

https://ai.googleblog.com/2019/05/moving-camera-moving-people-deep.html

Learning the Depths of Moving People by Watching Frozen People (paper):

https://arxiv.org/pdf/1904.11111.pdf

Acknowledgements

The research described in this post was done by Zhengqi Li, Tali Dekel, Forrester Cole, Richard Tucker, Noah Snavely, Ce Liu, and Bill Freeman, who thank Miki Rubinstein for his valuable feedback.

ColdSpotting - Wifi Network Diagnostics in Augmented Reality

Visualize and Diagnose Wifi Network Signal Strength with ColdSpotting.

  • Visualize the strength of your Wi-Fi network in real time and in Augmented Reality.
  • Find dead spots where your reception is weak.
  • Diagnose poor network performance.
  • Perform site surveys to ensure high reliability for mission-critical events.
  • Test using real-world devices.
  • Compare coverage across manufacturers and product lines.

Perfect for:

  • IT Installers
  • AV Systems Engineers
  • Home Entertainment Enthusiasts
  • Video Gamers
  • Network Administrators
  • Educational Institutions

Requires iOS 11.2 or later and an iPhone 7 or newer.

You can sign up for our mailing list and a TestFlight invite here:

http://www.showblender.com/coldspottingmailinglist

More information will be added at

http://www.showblender.com/coldspotting


AI Bias - Continued studies with Image Recognition

This 4th of July weekend I had a chance to experiment with the image recognition platform mentioned last week and on the blog. Using a popular off-the-shelf (PaaS) image recognition service, I've begun submitting photos and capturing their provocative results. The process raises a lot of questions about the future of machine learning, particularly around the biases that may be introduced into such systems.
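
For reference, the submission loop is simple. The endpoint, auth scheme, and response shape below are hypothetical stand-ins for whichever tagging service you use; most return tag/confidence pairs roughly like this:

```python
import requests

TAGGING_URL = "https://api.example-vision.com/v1/tags"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

def tag_image(path):
    """Submit one photo and return its tags sorted by descending confidence."""
    with open(path, "rb") as image:
        response = requests.post(
            TAGGING_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": image},
            timeout=30,
        )
    response.raise_for_status()
    tags = response.json()["tags"]  # e.g. [{"tag": "worker", "confidence": 0.93}, ...]
    return sorted(tags, key=lambda t: t["confidence"], reverse=True)

for tag in tag_image("example1.jpg"):  # hypothetical file name
    print(f"{tag['tag']}: {tag['confidence']:.2f}")
```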

Showcase: Submission Photos with the Resulting Tags, Sorted by Highest Confidence

Example #1

Notable Results: Detected the hat pretty well, along with gender; used the term 'worker'; detected caucasian and adult.

Does the algorithm have any correlations between the term 'worker' and gender or race?

Example #2

Notable Results: High confidence for adult, male, caucasian, business, work, computer, internet, 20s, corporate, worker.

There is no computer or internet in the photo, yet there are many career-related results. I'm not sure why this image evokes such responses (are Ray-Bans highly correlated with the tech sector?).

Example #3

Notable Results: attractive, pretty, glamour, sexy, sensuality, cute, gorgeous; age and race are also present.

It seems like the tags returned for images of women have a very different focus. While images of men return terms that reflect interests or jobs, results for images of women often use qualitative descriptors of their physical bodies. 'Model' was listed, but not the tag 'worker', which we see more generally applied to male images.
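
To move beyond eyeballing individual results, you could tally tag frequencies per group and look at which tags skew one way. A toy sketch; the tag lists are illustrative echoes of the examples above, and a real comparison would need a much larger, controlled sample:

```python
from collections import Counter

def tag_counts(image_tag_lists):
    """Count how often each tag appears across a group of images."""
    counts = Counter()
    for tags in image_tag_lists:
        counts.update(tags)
    return counts

# Illustrative tag lists only -- not real data.
male = tag_counts([["adult", "male", "business", "worker"],
                   ["male", "corporate", "computer", "adult"]])
female = tag_counts([["attractive", "pretty", "sexy", "adult"],
                     ["glamour", "cute", "adult", "model"]])

# Positive delta = tag leans male, negative = leans female.
skew = {tag: male[tag] - female[tag] for tag in set(male) | set(female)}
for tag, delta in sorted(skew.items(), key=lambda item: item[1]):
    print(f"{tag:12s} {delta:+d}")
```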

Example #4

Notable Results: caucasian, attractive, adult, sexy, lady, fashion.

Once again, more measures of attractiveness. Is there a correlation in this algorithm between gender or race and ideals of beauty?

Example #5

Notable Results: high-confidence results for white adult male, attractive, business, handsome, pretty, corporate, executive.

We are seeing some qualitative appearance results, but dramatically fewer than for the images of women. We're also seeing more business-related terms, which contrasts with the career results for women.

Example: Dog Tax!

Notable Results: It did do an amazing job recognizing that Ellie is, in fact, a Chesapeake Bay Retriever.  (Also a possible hippo)

I will continue to submit more images and log their results (hopefully with more diversity in the next round). As technology progresses and becomes ever more pervasive in our lives, it becomes important to review the ethical implications along the way. This sample size is too small to support any broader claims, but for me it points to some questions we should ask ourselves about the relationship between technology and human constructs such as race or gender.

Do you have any concerns on the future of AI?  Leave your comments below!

We will be adding future content at AIBias.com

AI Bias - Questions on the Future of Image Recognition

I'm interested in collaborating on a project about bias in AI. I made a prototype of an image recognition app that detects and classifies objects in a photo. After running a few tests, I began to notice that race and gender were categories that would occasionally appear.

This made me more broadly curious about the practical implications of how AI/machine learning is designed and implemented, and the impact these choices could have in the future. There are many different image recognition platforms available to developers, and they approach the problem in differing ways. Some utilize metadata from curated image datasets, some use images shared on social media, and some use human labor (Mechanical Turk) to tag photos. How do these models differ with respect to inherent cultural, religious, and ethnic biases? The complicated process of classifying more abstract notions such as race, gender, or emotion leaves a lot of interpretation up to the viewer. Not to mention the problem of the null set, in which ambiguous classifications may go untagged, leaving crucial information out of predictive models.

These different modes of classification raise a few questions:

What does this AI think gender or race are?

How is the data seen as significant, and under what circumstances should it be used?

Should AI be designed such that it is "color blind"?

Please let me know your thoughts in the comments below!

If you are interested in collaborating, or playing with the prototype that led to this discussion, join the mailing list at AIBias.com or Showblender.com


http://imgur.com/a/75qNn

Tech Note: Christie Spyder x20 VI Vertical Pixel Threshold Still Functionality

There is a known issue with x20 hardware that limits how Still Layers can be used in certain high-resolution setups.

When configuring a frame with a VI vertical pixel count above 1850 pixels in height, the system is limited in how it can utilize stored Still images.

Only layers 3, 6, 11, and 14 can be used as still layers. Using other layers will induce strange behavior, including cropping, random noise, and various other errors when adjusting Keyframe parameters.

This is a hardware limitation with all x20s and all versions of Vista Advanced.

TL;DR - Exact wording via Vista Systems: "If the vertical height is higher than 1850 only layers 3,6,11,14 can load stills. This is a hardware limitation for any version software."

I have also heard people suggest this limit was at 1856 pixels.  YMMV.
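
This is not a Vista tool, just a pre-show checklist helper you could sketch from the rule above; the threshold is parameterized to cover the 1850/1856 ambiguity:

```python
# Layers reported as still-capable on the x20 above the height limit.
STILL_SAFE_LAYERS = {3, 6, 11, 14}

def risky_still_layers(frame_height, still_layers, threshold=1850):
    """Return the configured still layers that will misbehave at this
    frame height. Vista Systems says the limit is 1850; some users
    report 1856, hence the parameter."""
    if frame_height <= threshold:
        return set()
    return set(still_layers) - STILL_SAFE_LAYERS

bad = risky_still_layers(frame_height=2160, still_layers={2, 3, 7, 11})
if bad:
    print(f"Layers {sorted(bad)} cannot load stills at this frame height")
```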

BugFix: Unreal Engine 4.10 Third Person Tutorial.

I've really been enjoying Unreal Engine and the learning resources they provide. Unfortunately, the engine is constantly updating, and sometimes the methods explained in these tutorials change. In this case, the Introduction to Third Person Blueprint Game tutorial has a couple of issues (e.g., using AnimNotify montage branching messages, branch point tracks, and the DefaultSlot group for the layered blends). I've pushed working solutions to a GitHub repo.

More information about branch tracking can be found here.


Spyder Console Crashing with Gefen Pro DVI Matrix Router over IP

I ran across a frustrating bug when using a GefenPro as an external DVI Matrix Router with an x20 and an M2C-50 console.  

After going idle for some period of time (more than 1 minute, less than 10), a bug occurs: recalling Command Keys still works, but Function Keys cause the console to hang and then lose connectivity to the Spyder. It then re-establishes a connection and fires all the buffered Function Keys sequentially.

When the console crashes it registers a Remote error in the Alert Viewer: "Telnet Socket Timed Out Waiting for Response".

To solve this issue, use a serial connection from the Spyder frame to the Gefen Pro router. Using the Gefen IV serial protocol in Spyder, I was able to get a stable connection.

IP control can intermittently 'time out' under certain circumstances.
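
If you have to stay on IP control for your own scripting (rather than routing through the Spyder), the only mitigation I can suggest is a keep-alive so the session never goes idle. Everything below -- address, port, and the no-op bytes -- is an assumption to verify against your own setup and the GefenPro manual:

```python
import socket
import threading
import time

def keep_alive(host, port=23, interval=30.0):
    """Open a telnet connection and send a harmless byte periodically so
    the router's idle timer never fires. The CR/LF no-op is a guess --
    substitute a safe status query from the GefenPro manual."""
    sock = socket.create_connection((host, port), timeout=5)

    def loop():
        while True:
            try:
                sock.sendall(b"\r\n")
            except OSError:
                break  # connection dropped; stop pinging
            time.sleep(interval)

    threading.Thread(target=loop, daemon=True).start()
    return sock

router = keep_alive("192.168.1.50")  # hypothetical router address
```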