How to create an icon button plugin for Bubble.io using Bootstrap and Font Awesome

When using Bubble I couldn’t find a decent icon button plugin so I made one using Font Awesome and Bootstrap. I thought others might find this useful so I created a quick video.

Shared - HTML Header

<link href="https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" rel="stylesheet">
<link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.9.0/css/all.min.css" rel="stylesheet">

Elements - initialize() function

function(instance, context) {
  // No id on the button - Bubble may place multiple instances of the element on one page
  var btn = $('<button class="btn"></button>');
  instance.canvas.append(btn);

  function clickHandler(e) {
    e.preventDefault();
    instance.triggerEvent("click");
  }
  btn.on('click', clickHandler);
}

Elements - update() function

function(instance, properties, context) {
  var btn = $(instance.canvas.children()[0]);
  // Reset the class list each update so a changed type doesn't leave a stale btn-* class behind
  btn.attr('class', 'btn btn-' + properties.type);
  btn.html('<i class="' + properties.icon + '"></i> &nbsp;' + properties.text);
}
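To make the result concrete, here's the markup the update() function builds for some sample (hypothetical) property values - it's plain string concatenation, so you can try it outside Bubble:

```javascript
// Sample (hypothetical) property values a Bubble user might set on the element
var properties = { type: "primary", icon: "fas fa-camera", text: "Snap" };

// Same string-building as the update() function above
var inner = '<i class="' + properties.icon + '"></i> &nbsp;' + properties.text;
var buttonMarkup = '<button class="btn btn-' + properties.type + '">' + inner + '</button>';

console.log(buttonMarkup);
// <button class="btn btn-primary"><i class="fas fa-camera"></i> &nbsp;Snap</button>
```

Bootstrap styles the button from the btn/btn-primary classes and Font Awesome renders the icon from the fas fa-camera classes.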

Using async Node.js with Bubble to create simple mini projects

I love the idea of using no-code tools to make mini-apps for your team (and other teams). I knocked together a quick video-manifest parser using Bubble and Node.js. A developer could do this on the command line easily, but I love being able to open up these types of tools to people who aren't quite as technical.

This tutorial shows you how to create a Bubble plugin that uses server side actions.

What do we want to achieve here?

  • An app that we can pass in a url of a DASH or HLS manifest
  • The app outputs the parsed manifest to the screen

How are we going to do this?

  • Create a Bubble plugin that takes a url parameter input
  • The Bubble plugin should download the manifest from the url
  • The manifest should then be parsed in a server side action using either m3u8-parser (if HLS) or mpd-parser (if DASH).

Create a Bubble plugin

  • On Bubble go to My Plugins page and click New plugin
  • Name your plugin something catchy. I called mine manifest-parser. Super catchy.
  • Fill in your plugin's general details.
  • On the Actions tab click Add a new action. Name your action in the popup. In the drop down for Action type select Server side. This is where you can start using the power of Node.js. Exciting.
  • Now we need to pass a couple of parameters to our server code. These are going to be the url of the manifest and the streaming protocol (DASH or HLS). In our server code we will load the url and parse the loaded data with the respective parser (https://www.npmjs.com/package/m3u8-parser for HLS and https://www.npmjs.com/package/mpd-parser for DASH). Fill in the Fields section with a url field and a streamingProtocol field.
  • By doing this we are able to view these params in our test app. In my test app I have a Parse button, an input for the url and a dropdown to select HLS or DASH. Go to the app you are testing your plugin in and attach a workflow to a button. On the workflow you will now be able to select your plugin from the Plugins menu.
  • Now as part of the workflow I can pass in the values from my manifest URL input and the HLS/DASH dropdown.
  • Yes! Time for the fun part: running the server side code and utilising Node.js packages. Hop back into your plugin project. In the Returned values section add a parsedManifest key to be returned (this will contain a string representation of our parsed manifest - I guess that was kind of obvious from the name).
  • This is the bit that seems to trip people up - making an asynchronous call from the server side function. The code below is the basic outline of how to do this (you'll notice your dependencies are updated automatically - make sure the This action uses node modules checkbox is checked).
function(properties, context) {

  const fetch = require('cross-fetch');
  var url = properties.url;
  let manifest = context.async(async callback => {
    try {
      let response = await fetch(url);
      let plainText = await response.text();
      callback(null, plainText);
    }
    catch (err) {
      callback(err);
    }
  });
  return {parsedManifest: manifest};
}
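context.async is Bubble's bridge between an async function and the synchronous return of a server side action. Outside Bubble you can sketch the same error-first callback pattern with a Promise (runAsync is my name for it, not Bubble's - this is only an approximation, since Bubble actually blocks until the callback fires):

```javascript
// Minimal stand-in for Bubble's context.async (hypothetical name: runAsync).
// Converts an error-first callback into a Promise you can await.
function runAsync(fn) {
  return new Promise((resolve, reject) => {
    fn((err, result) => {
      if (err) reject(err);
      else resolve(result);
    });
  });
}

// Usage - same shape as the Bubble snippet above, with a dummy body
// standing in for the fetch(url) + response.text() calls.
async function demo() {
  return runAsync(async callback => {
    try {
      const plainText = "#EXTM3U"; // stand-in for the downloaded manifest
      callback(null, plainText);
    } catch (err) {
      callback(err);
    }
  });
}
```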
  • Now create a text field in your test app. I use a custom state from the workflow to display the return value of the above.
  • Hit Preview and run your app. Drop in any url and it’ll display the source as a string in the text box - magic! Now all we have to do is flesh out the server-side code a bit and we’re done. Update with the code below and try dropping in a manifest url and running the app.
function(properties, context) {

  var m3u8Parser = require('m3u8-parser');
  var mpdParser = require('mpd-parser');
  const fetch = require('cross-fetch');

  var url = properties.url;
  var isDashProtocol = properties.streamingProtocol === 'dash';
  let manifest = context.async(async callback => {
    try {
      let response = await fetch(url);
      let plainText = await response.text();
      if(isDashProtocol) {
        var parsedManifest = mpdParser.parse(plainText, url);
        callback(null, JSON.stringify(parsedManifest, null, 2));
      } else {
        var parser = new m3u8Parser.Parser();
        parser.push(plainText);
        parser.end();
        callback(null, JSON.stringify(parser.manifest, null, 2));
      }   
    }
    catch (err) {
      callback(err);
    }
  });

  return {parsedManifest: manifest};
}

Forced remote - a tech lead's guide to making the transition from the office to WFH

Over the course of the last month a lot of teams have gone from working in the office to working remotely. Teams have had to adapt quickly to the new situation and its nuances. This is my take on running a distributed team.

My current team was semi-remote - half of us were in the office, the other half spread across multiple timezones. I was based in the office so I was involved in the daily chit-chat about product and roadmaps - and since going remote I've noticed just how much of this I would have been missing had I been remote all along. I'm currently dogfooding our remote culture and I've noticed a few places we've come up short.

The daily standup

Having people located across timezones meant we needed a standup time when everyone could attend - this was around 4pm every day OT (office time). After reading this article by Jason Fried I proposed doing standup asynchronously over Slack - we did this for about 9 months. I was pretty happy with it - everybody got to post their updates without being interrupted. The Slack channel was public so anyone else in the company who was interested could come and have a read.

Then I tried this async approach whilst being remote full-time. I had previously WFH'd one day a week but I hadn't experienced being fully remote for any period of time. I found I missed interacting with the people in the office; I missed chatting and just connecting with my teammates. We've since implemented a standup/knowledge share/general hangout combo each day over Zoom at a time that is convenient for all - it generally lasts 15-30 minutes and in that time we let everyone know what we've been working on and also discuss anything that comes up (we tend not to take it offline but talk it out there and then). So far the team likes it and the previously-fully-remote team members think it's an improvement. This Zoom meeting is open to anyone in the business who is interested or who wants to discuss something with the team.

Slack etiquette

The way we use Slack hasn't changed since we went into forced-remote mode. We try and use Slack asynchronously - we don't expect answers right away. We have a pinned message setting expectations about the way we work.

I personally also turn off notifications and badges and minimise Slack whilst I am working. I also encourage the use of Do Not Disturb mode after hours (I tend to check in during the evenings and people are able to force a notification to me if urgent).

Zoom/conference calls

Headphones with a mic are handy (rather than laptop speakers/mic) and we also mute when others are talking. I know there is a school of thought that everyone should stay unmuted and get better headsets instead, but I think with kids around due to homeschooling that might be tricky - I know it would be for me with my kids.

Managing your team

I’ve found the best way to manage a team remotely is similar to how I manage the team in house - giving trust and responsibility whilst ensuring psychological safety for the team to discuss anything, small or large. My team is awesome; they are skilled and driven. I bet yours is too. I’ve never been a bums-on-seats manager - "if I can’t see you working then you aren’t working" is not my style and would be hell in the current situation. I can imagine micro-managers aren’t having a great time of it currently.

LISTICLE TIME - top tools we use for remote work you might find useful

  • Slack
  • Zoom/Teams/Slack video/Google hangouts - video conferencing
  • Droplr - I use this every day for screenshots or sharing video
  • Zeplin - great for design teams to distribute designs and assets to devs
  • Decent headphones/mic
  • Jira or ticket tracking software so work is well documented and easy to pick up - this seems obvious for software teams but may be a new concept to those not in the software industry.
  • A "bat phone" for emergencies

How to use PUT or DELETE with a roUrlTransfer object in BrightScript

In most other languages using verbs other than POST and GET is a fairly simple operation. In BrightScript, however, using PUT and DELETE operations isn't particularly well documented in the SDK docs.

To add PUT or DELETE we need to set the request type on our roUrlTransfer object. In a standard GET request you’d have something like this:

function httpGetRequest(url as String) as Object
 request = createObject("roUrlTransfer")
 request.setURL(url)
 response = request.getToString()
 return response
end function

But to use a DELETE or PUT request you need to set the request type as well as using the asyncPostFromString or postFromString call. So to use a PUT use something like:

function httpPutRequest(url as String, body as String) as Void
 request = createObject("roUrlTransfer")
 request.setURL(url)
 request.setRequest("PUT")
 request.postFromString(body)
end function

Or similarly a DELETE:

function httpDeleteRequest(url as String, body as String) as Void
 request = createObject("roUrlTransfer")
 request.setURL(url)
 request.setRequest("DELETE")
 request.postFromString(body)
end function
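For contrast with the BrightScript above, in most languages the verb is just an option on the request. A quick JavaScript sketch (the URL is a placeholder and fetch() isn't actually invoked here):

```javascript
// Build the options object you'd hand to fetch() - the HTTP verb is a plain
// string option rather than a special setRequest() call. Sketch only; no
// request is sent.
function buildRequest(method, body) {
  return {
    method: method, // "PUT", "DELETE", etc.
    headers: { "Content-Type": "application/json" },
    body: body
  };
}

var putOptions = buildRequest("PUT", '{"name":"example"}');
// fetch("https://example.com/api/items/1", putOptions) would then send the PUT
```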

Exiting out of a BrightScript SceneGraph application

To exit a SceneGraph application you have to complete execution of your main method. A nice easy way to do this is to observe a field on your scene and then fire a roSGNodeEvent via the port (once you've read this you can download a full working app that demonstrates the below here - it's pretty sweet).

You’ve probably got something like the following in your main app brs file.

screen = CreateObject("roSGScreen")
m.port = CreateObject("roMessagePort")
screen.setMessagePort(m.port)
scene = screen.CreateScene("mainScene")
screen.show()
scene.setFocus(true)

while(true)
  msg = wait(0, m.port)
  msgType = type(msg)

  if msgType = "roSGScreenEvent" then
    if msg.isScreenClosed() then
      return
    end if
  end if
end while

This would exit out if you clicked back on the RCU when the scene is focused as msg.isScreenClosed() would be true - but what if we wanted to close the app on another event? It’s actually pretty simple to do. The main challenge is exiting out of the while loop. A handy way is to add an observer to the scene and pass the port as the handler.

You could modify this main screen to look something like:

screen = CreateObject("roSGScreen")
m.port = CreateObject("roMessagePort")
screen.setMessagePort(m.port)
scene = screen.CreateScene("mainScene")
screen.show()
scene.observeField("exitApp", m.port)
scene.setFocus(true)

while(true)
  msg = wait(0, m.port)
  msgType = type(msg)

  if msgType = "roSGScreenEvent" then
    if msg.isScreenClosed() then
      return
    end if
  else if msgType = "roSGNodeEvent" then
    field = msg.getField()
    if field = "exitApp" then
      return
    end if
  end if
end while

By adding the observer scene.observeField("exitApp", m.port) on the scene a roSGNodeEvent msg will fire on m.port when we change the exitApp interface field. It’s a nice succinct way of handling this.

Set up your MainScene.xml so it has an observable interface boolean field called exitApp or similar:

<?xml version="1.0" encoding="utf-8" ?>

<component name="MainScene" extends="OverhangPanelSetScene" >
    <interface>
        <field id="exitApp" type="boolean" value="false" />
    </interface>

    <children>

    </children>
    <script type="text/brightscript" uri="pkg://components/MainScene.brs" />
</component>

Then you need to set up your MainScene.brs to alter the exitApp field on an OK click:

function init() as Void
    print "ExitApp"
end function

function onKeyEvent(key as String, press as Boolean) as Boolean
    if press and key = "OK" then
        m.top.exitApp = true
        return true
    end if
    return false
end function

Download the source for this here.


How to calculate how much texture memory an image will use on a Roku box

Ever found your Roku app running sluggishly or even crashing? If you're building an image-heavy application you may be bumping up against your texture memory limits. Since Roku introduced SceneGraph there have been massive improvements in memory handling and a visible reduction in crashes compared with apps that use SDK1 - but have you ever wondered how to calculate how much texture memory an image will take? (Probably not, you say? Well, I'll tell you how anyway.)

It’s actually fairly simple and comes down to the dimensions of the image rather than its file size or any compression technique.

To figure out how much texture memory an image will use in kilobytes, just use the following formula (where numberOfChannels is always 4 - RGB plus alpha):

(width * height * numberOfChannels) / 1024

To get megabytes just divide by 1024 again.

So if you had an image of dimensions 1280 x 720 you can calculate that this will take up:

(1280 * 720 * 4) / 1024 = 3600 KB (approx. 3.5 MB)
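The formula drops straight into code - here's a small helper (my own naming, not a Roku API) for sanity-checking image sizes before shipping them to the box:

```javascript
// Texture memory used by a decoded image on a Roku box, in kilobytes.
// Depends only on pixel dimensions - 4 channels (RGB plus alpha) per pixel.
function textureMemoryKB(width, height) {
  var numberOfChannels = 4;
  return (width * height * numberOfChannels) / 1024;
}

console.log(textureMemoryKB(1280, 720));        // 3600 (KB)
console.log(textureMemoryKB(1280, 720) / 1024); // 3.515625 (approx 3.5 MB)
```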

Using HTTPS on a Roku device

I was once developing a Roku application and switched from a staging environment to (what was supposed to be) an identical production environment and everything stopped working. It turned out that the only difference with the API was one was HTTP and one was HTTPS. I know, cool story, bear with me.

Generally for a HTTP request you have something like this sat in a task node (to make it asynchronous):

function httpGetRequest(url as String) as Object
	request = createObject("roUrlTransfer")
	request.setURL(url)
	response = request.getToString()
	return response
end function

This is a simplified version of what you actually would have, as Roku isn't the best at handling anything other than POST/GET requests - you'd need to handle these yourself (I'll cover this in another post).

To enable HTTPS you have to set the certificates file and then initialise it. The certificate file is either the .pem file for your web server or the common bundle below (if you don't know which, try the common bundle - it is included on all Roku boxes). Add the following to the code we wrote above:

if left(url, 5) = "https" then
	request.setCertificatesFile("common:/certs/ca-bundle.crt")
	request.initClientCertificates()
end if

The complete request would look something like this:

function httpGetRequest(url as String) as Object
	request = createObject("roUrlTransfer")
	if left(url, 5) = "https" then
		request.setCertificatesFile("common:/certs/ca-bundle.crt")
		request.initClientCertificates()
	end if
	request.setURL(url)
	response = request.getToString()
	return response
end function

Setting up a SceneGraph application

Set up a SceneGraph application with correct directory structure and get it running on the box.

The way SceneGraph applications are set up with regards to directory structure is different from previous SDK1 applications.

Have a read of the information here

Basically you have your directory structure like this:

- components
- images
Makefile
manifest
- source

To run your application you will need a Main function that creates a screen in a Main.brs file. Your Main.brs file should be in your source folder. Here is an example of a Main.brs file:

sub Main()
  showChannelSGScreen()
end sub

sub showChannelSGScreen()
  print "in showChannelSGScreen"
  screen = CreateObject("roSGScreen")
  m.port = CreateObject("roMessagePort")
  screen.setMessagePort(m.port)
  scene = screen.CreateScene("mainScene")
  screen.show()

  while(true)
    msg = wait(0, m.port)
    msgType = type(msg)

    if msgType = "roSGScreenEvent" then
      if msg.isScreenClosed() then
        return
      end if
    end if
  end while
end sub

As you can see above we created a scene called mainScene. This needs to be implemented at some point. In SceneGraph applications there are two file types we use: *.brs and *.xml. We generally use the xml files for views and the *.brs files for everything else. You can embed BrightScript code into the xml files but I've often found it good practice to separate the two.

Next create a file called components/mainScene/MainScene.xml and write the following code in it.

<?xml version="1.0" encoding="utf-8" ?>

<component name="MainScene" extends="OverhangPanelSetScene" >
    <interface>
    </interface>

    <children>

    </children>
    <script type="text/brightscript" uri="pkg://components/mainScene/MainScene.brs" />
</component>

Notice that we have included a brs file within it (<script type="text/brightscript" uri="pkg://components/mainScene/MainScene.brs" />). This brs file would usually contain the business logic for the view - like a mediator/controller. In this example though it’s cool just to print something out - like this (create this in components/mainScene/MainScene.brs):

function init() as Void
  ? "howdy"
end function

Note how the init() method is always called on object creation - like a constructor.

Getting the app on the box

Now to get the application on to the box. To make things easier I have a Makefile that I use (http://ry.d.pr/Fysh - mainly based on Roku's). To use it, download it to the root directory of your application. Open a shell in your root and execute (where 192.168.1.XXX is the IP of your box):

export ROKU_DEV_TARGET=192.168.1.XXX

Once you have done this you can simply type:

make install

to install your app.

Have a play around and get your app on the box - add a few components to the view if you like. Also have a look at the manifest file and add some different splash screens there.

To view trace output/errors you can telnet into your box.

telnet 192.168.1.XXX 8085

On firmware prior to 7.5 you can use ports 8085-8089 for different outputs (main app, threads, component traces) but on firmware 7.5 all of this is amalgamated on port 8085 (which makes life a lot easier).