
Gathering my tweets, photos, and notes from #Picademy in Jersey City. Thank you, @Raspberry_Pi, for two days of inspiring, exciting, fun, and thought-provoking professional development! #STEMed #STEAM #MakerEd

I felt incredibly fortunate to be in a room of educators on June 21-22 for two days of Picademy hosted at the Liberty Science Center in Jersey City, New Jersey! The workshops were led by Andrew Collins (Educator Training Manager at the Raspberry Pi Foundation) and Raspberry Pi Certified Educators from previous cohorts, including Amanda Haughs and Chantell Mason. There was a separate Picademy session on June 18-19, plus other networking opportunities throughout the week facilitated by Dana Augustin (Educator Program Coordinator at the Raspberry Pi Foundation). Per Picademy’s website:

Picademy is the Raspberry Pi Foundation’s free face-to-face professional development programme that supports educators throughout their digital making and computing journey. This two-day training event is held at venues around the UK and North America. After completing the programme, educators join a community of passionate digital making practitioners. Interested in attending? Visit our event calendar to find a Picademy near you.

Day 1 consisted of a crash course in setting up the Raspberry Pi (HDMI to a screen, USB keyboard and mouse, power cord, SD card) and gaining insights into a variety of attachments and HATs (GPIO breakout boards, Sense HAT, Explorer HAT Pro, Piano HAT, Mini Black HAT Hack3r, Camera Module V2, and a traffic light add-on).
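
To give a sense of those Day 1 exercises, here’s a minimal “hello world” of my own (a reconstruction, not the official worksheet) that scrolls a greeting across the Sense HAT’s LED matrix:

from sense_hat import SenseHat

sense = SenseHat()
# Scroll a short greeting across the 8x8 LED matrix
sense.show_message("Hello, Picademy!", scroll_speed=0.05, text_colour=[252, 113, 7])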

Day 2 was an opportunity to break into groups and have extended time to develop a project prototype. I partnered with Cathy Knives Chau and Lauren Berrios, and we created PiPix, a portable Raspberry Pi-powered, Polaroid-inspired camera that students could pick up at any time to photograph class projects or document class trips. Different filters can be applied, and photos would be uploaded to a class Twitter stream. We successfully designed a countdown timer that displays on the Sense HAT, enabled the Sense HAT’s joystick to take the picture, and applied a random filter to the captured image. We needed more time to let the joystick choose a filter and/or let the user choose between capturing a still image and an animated GIF. We were on the verge of integrating our program with Twitter’s API (thanks to Cathy!), but didn’t manage it in time. Cathy, Lauren, and I are hoping to gather later in the summer to complete a successful PiPix prototype!

Here’s our code so far…

# PiPix
# Using the Sense HAT for geolocation, four buttons for filters, and a countdown
# Use timestamp and direction from the joystick on the Sense HAT
from picamera import PiCamera
from gpiozero import Button  # reserved for the planned physical filter buttons
from sense_hat import SenseHat, ACTION_PRESSED, ACTION_HELD, ACTION_RELEASED
from time import sleep
from signal import pause  # keeps the script alive to wait for joystick events
import random
import datetime
#import tweepy
#import json
camera = PiCamera()
sense = SenseHat()
#with open('twitterauth.json') as file:
#    secrets = json.load(file)
#auth = tweepy.OAuthHandler(secrets['consumer_key'], secrets['consumer_secret'])
#auth.set_access_token(secrets['access_token'], secrets['access_token_secret'])
#twitter = tweepy.API(auth)
randeffect = ['colorswap', 'watercolor', 'cartoon', 'sketch']
t = (7, 219, 252)  # background colour (RGB)
a = (252, 113, 7)  # digit colour (RGB)
# 8x8 pixel maps for the 3-2-1 countdown shown on the Sense HAT LED matrix
countdown1 = [
    t, t, t, t, a, t, t, t,
    t, t, t, a, a, t, t, t,
    t, t, t, t, a, t, t, t,
    t, t, t, t, a, t, t, t,
    t, t, t, t, a, t, t, t,
    t, t, t, t, a, t, t, t,
    t, t, t, t, a, t, t, t,
    t, t, t, a, a, a, t, t]
countdown2 = [
    t, t, t, a, a, a, t, t,
    t, t, a, t, t, t, a, t,
    t, t, t, t, t, t, a, t,
    t, t, t, t, t, a, t, t,
    t, t, t, t, a, t, t, t,
    t, t, t, a, t, t, t, t,
    t, t, a, t, t, t, t, t,
    t, t, a, a, a, a, a, t]
countdown3 = [
    t, t, a, a, a, a, t, t,
    t, t, t, t, t, t, a, t,
    t, t, t, t, t, t, a, t,
    t, t, t, a, a, a, t, t,
    t, t, t, t, t, t, a, t,
    t, t, t, t, t, t, a, t,
    t, t, t, t, t, t, a, t,
    t, t, a, a, a, a, t, t]
# Joystick: any press (not a release) triggers a 3-2-1 countdown and a photo
def capture(event):
    if event.action != ACTION_RELEASED:
        camera.start_preview(alpha=192)
        sense.set_pixels(countdown3)
        sleep(.5)
        sense.set_pixels(countdown2)
        sleep(.5)
        sense.set_pixels(countdown1)
        sleep(.5)
        date = datetime.datetime.now().strftime("%m_%d_%Y_%H_%M_%S")
        camera.image_effect = random.choice(randeffect)
        camera.capture("/home/pi/joy_image{0}.jpg".format(date))
        camera.stop_preview()

sense.stick.direction_any = capture
#for i in range(4):
#    camera.image_effect = random.choice(randeffect)
#    camera.capture("/home/pi/PiPix{0}.jpg".format(i))

# Keep the program running so the joystick handler stays active
pause()
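
For the Twitter step we ran out of time on, here’s a rough sketch (untested, my own, and not part of our working prototype) of how the commented-out tweepy setup above could post a captured photo. It assumes tweepy 3.x and a twitterauth.json file holding the four keys named in the comments:

import json
import tweepy

with open('twitterauth.json') as file:
    secrets = json.load(file)

auth = tweepy.OAuthHandler(secrets['consumer_key'], secrets['consumer_secret'])
auth.set_access_token(secrets['access_token'], secrets['access_token_secret'])
twitter = tweepy.API(auth)

def tweet_photo(filename):
    # update_with_media attaches the image file to a new status (tweepy 3.x)
    twitter.update_with_media(filename, status="Fresh from the PiPix! #Picademy")

In capture(), we would then call tweet_photo() with the same path passed to camera.capture(), so each photo goes straight to the class stream.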

Chantell captured some video of our presentation and shared it via Twitter. Her tweet is pasted below:

Below, I’ve gathered my tweets from the two-day workshop:

And here are two tweets which include info about stuff I need to explore further…

