life is a rum go guv’nor, and that’s the truth

Cache Makers Grand Opening!

Cache Makers is an after-school and summer program that helps Cache Valley youth find their passion and prepare for careers in Science, Technology, Engineering, and Math (STEM) through hands-on learning activities. It also provides a context for STEM professionals to give back by mentoring youth. So far we have done projects involving robotics, microcontroller programming and circuit design, 3D design and printing, laser cutting, e-textiles, web and mobile application development, game development, and space science.

Cache Makers has had a great summer! The new staff, led by Dallin Graham, did a fantastic job setting up the new space, running summer schools, and supporting volunteer-led groups. It is gratifying to see the excitement and growth of the youth.

This Saturday at 1:00pm Cache Makers is going to hold a Grand Opening / Ribbon Cutting at the maker space. This is a great opportunity to come see the space and learn about the activities we are doing and how you can get involved.

See the Cache Makers website (http://cachemakers.org) and the Cache Makers – Grand Opening Flyer for additional details.

Cache Makers is located at:
990 S Main Suite A
Logan, UT

Please share this with your friends and anyone else you think would be interested. I’d love to see you at Cache Makers this Saturday at 1pm.

Additional Background

Cache Makers was founded in October of 2013 by a couple of friends who have a passion for STEM and for sharing that passion with youth. Volunteer leaders ran our program for the first year and a half in a donated space, with contributions of money, equipment, and materials from multiple sponsors (http://cachemakers.weebly.com/about-us.html). Cache Makers is organized as a 4-H club under Utah State University Extension.

In November of 2015, Cache Makers received a three-year grant from the Department of Workforce Services (DWS) to expand the program to reach 1,000 youth, with special efforts to reach Hispanic and female youth. With the DWS grant, we moved to a new 3,200 sq. ft. building, hired a full-time program coordinator and multiple assistants, and greatly expanded our program.

The mission of Cache Makers is not only to prepare youth, but also to build a sustainable community of makers in Cache Valley. Volunteer mentors continue to play a major role in delivering the Cache Makers program and we are always looking for collaborations and people interested in mentoring.

Cache Makers!

In October of 2013, Kevin Reeve and I put our heads together and started a youth maker club called Cache Makers. Since then we have learned a lot and had a lot of fun. The youth are amazing and the tech is great fun! We have received a lot of support from the community through volunteer adult mentors, donations of equipment and money, and invitations to come visit. Thanks!

Here is a video that UEN recently produced about Cache Makers:


The Cache Makers Flickr page can also give you an idea of what we have been up to.

Here is a story that Utah Public Radio ran on Cache Makers a while back.

In the past year we have received a lot of interest and support. Recently we received a three-year grant from the Department of Workforce Services to expand the program to reach 1,000 youth. Very exciting! As part of that, we are moving to a new, larger space. We are also hiring a program coordinator and multiple assistants.

5 Minute Family History

I’m an amateur family history researcher. When I was 10 years old, my mom took me to the family history library in downtown SLC. I experienced a little success and got bit by the bug. I’ve dabbled in it ever since. One of the big challenges people face is how to do meaningful work in small amounts of time. That is why I was particularly excited about the FamilySearch indexing tools. I’ve seen my daughter grab hold of that and do thousands of names.

I’ve been very impressed by the new FamilySearch website and have been using it to document sources, attach pictures, extend lines, collaborate with others, and eliminate duplicates. This last week I was introduced to puzilla.org and was able to use it to accelerate the work I’m doing. I played a bit with the APIs and came up with the following ideas for tools:

There are a number of specific tasks that a person does over and over again, but figuring out when to do each one is challenging for the average researcher. Moreover, if you did them repeatedly you could do them more efficiently. To help solve this problem, I want to build task-specific tools that let you sit down and do one specific task repeatedly for your ancestors and their descendants, similar to how indexing works. That way the user interaction could be tuned to make each task more efficient and effective.

All of the tools would start from a given person and recurse breadth first through a specified number of generations of your ancestors and their descendants (a traversal sketch follows the list), helping you do the following tasks:

  1. Find and merge duplicates – Present the merge screen, one after another, for people that FamilySearch recognizes as having likely duplicates. In that screen, provide additional guidance about performing the merge (e.g. make sure to merge into the person who has ordinance work performed).

  2. Attach sources and information – Go through all people that have fewer than x attached sources, find sources that are obvious matches (e.g. the first 3 in the list when you click the search resources link while looking at an individual), and make it easy to preview the source, attach it to the record, and copy information from it into the record.

  3. Extend end of lines – Same as #2, except do it for your ends of lines.

  4. Find missing family members – Do #3, except do it for families that have patterns likely to indicate missing spouses or children.

  5. Standardize data formats – Recognize examples of non-standard place names, dates, capitalizations, and other data formats that should be standardized, suggest possible standardizations, and allow the user to choose one or skip.

  6. Request ordinances – Given a set of criteria (e.g. needs ordinance work, born more than 100 years ago, no obvious duplicates, sufficient information to submit), present, one after another, families for which ordinances can be requested.
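
To make the shared traversal concrete, here is a minimal sketch in Python of the breadth-first walk each tool could build on. The get_parents, get_children, and visit callables are hypothetical stand-ins for whatever FamilySearch API calls and task-specific screens get plugged in; this is not the real client library.

from collections import deque

def walk_tree(start_id, max_generations, get_parents, get_children, visit):
    # Breadth-first walk over a person's ancestors and their descendants.
    # get_parents(person_id) / get_children(person_id) are hypothetical wrappers
    # around the FamilySearch API; visit(person_id) performs one task
    # (merge duplicates, attach sources, ...) for that person.
    queue = deque([(start_id, 0)])
    seen = {start_id}
    while queue:
        person_id, generation = queue.popleft()
        visit(person_id)
        if generation >= max_generations:
            continue
        for relative_id in get_parents(person_id) + get_children(person_id):
            if relative_id not in seen:
                seen.add(relative_id)
                queue.append((relative_id, generation + 1))

Each task-specific tool would supply its own visit function; the traversal and the "where was I?" bookkeeping stay the same.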

I think puzilla is a great start, but I believe much more can be done. I like the compact way it displays the ancestry and descendancy charts; if those were combined with the existing FamilySearch views and the task-specific interfaces listed above, they could give an overview of the work that needs to be done and help you remember where you are currently working.

Lean user experience

“There are many techniques for building an accurate customer archetype that have been developed over long years of practice in the design community. Traditional approaches such as interaction design or design thinking are enormously helpful. To me, it has always seemed ironic that many of these approaches are highly experimental and iterative, using techniques such as rapid prototyping and in-person customer observations to guide designers’ work. Yet because of the way design agencies traditionally have been compensated, all this work culminates in a monolithic deliverable to the client. All of a sudden, the rapid learning and experimentation stops; the assumption is that the designers have learned all there is to know. For startups this is an unworkable model. No amount of design can anticipate the many complexities of bringing a product to life in the real world. In fact, a new breed of designers is developing brand-new techniques under the banner of Lean User Experience. They recognize that the customer archetype is a hypothesis, not a fact.” Eric Ries, The Lean Startup, p.90

OER Recommender at RecSysTEL

I’m presenting OER Recommender at RecSysTEL today

Every time I boot into Windows I have to install a critical security update

I’ve had a Mac for years but used it mostly for testing. I always laughed at the religiously zealous Mac users. Last October I switched my day-to-day computer work from Windows to Mac. My only regret is that I waited so long to switch. I turn on my Windows box about once a week now, and it seems like every time I do, it asks me to install a critical security update. Just another good reason for the switch.

Introducing Models for Math

Tonight I will be leading a discussion about the NLVM, eNLVM, and a new site called Models for Math.

Models for Math is partnering with providers of interactive open content for math, including the GeoGebra community, to provide free hosting of websites where they can author and deliver online lessons. Models for Math will offer free accounts for individual teachers that allow them to use NLVM and eNLVM materials and other open content. Models for Math will sell affordable subscription-based interactive online math websites for schools and districts. Revenue from subscriptions will allow Models for Math to continue to improve its online learning environment and to support the development of high quality interactive online content for mathematics. Come to the meeting or visit the Models for Math website to learn more.

The meeting will be held in Elluminate, Wednesday, September 8th 2010 at 6:30pm Pacific / 9:30pm Eastern. To attend the meeting:

  1. Follow this link: http://tinyurl.com/math20event
  2. Click “OK” and “Accept” several times as your browser installs the software. When you see Elluminate Session Log-In, enter your name and click the “Login” button.
  3. You will find yourself in a virtual room. An organizer will be there to greet you, starting about half an hour before the event.

If this is your first Elluminate event, consider coming a few minutes earlier to check out the technology. The room opens half an hour before the event.

The discussion is part of the Math 2.0 Interest Group weekly online event series. The Math 2.0 Interest Group is a creation of Maria Droujkova. The goals of the Math 2.0 Interest Group meetings are to “share resources, to collaborate on our projects, and to save mathematics from its current obscurity in social media. From the comfort of your browser, join live conversations and debates with authors, community leaders, and innovators in mathematics education.”

Getting started with Hadoop on Amazon’s elastic mapreduce

After playing with Hadoop a bit in the past, I’m now trying out some things on Amazon’s Elastic MapReduce.

I signed up for a new AWS account and ran their sample LogAnalyzer Job Flow using the AWS console. That was easy enough. Next I attempted to run the same sample from the command line using the Amazon Elastic MapReduce Ruby Client.

Note: The Ruby Client README turns out to be very helpful.

Next I downloaded the source and looked at it. It seems simple enough. I noticed that this sample uses a library called Cascading, which appears to be a way to simplify common job flow tasks.

After adding the elastic-mapreduce app to my path and setting up my credentials file, I ran:

elastic-mapreduce --create --jar s3n://elasticmapreduce/samples/cloudfront/logprocessor.jar --args "-input,s3n://elasticmapreduce/samples/cloudfront/input,-output,s3n://folksemantic.com/cloudfront/log-reports,-start,any,-end,2010-09-07-21,-timeBucket,300,-overallVolumeReport"

It produced:

INFO Exception Retriable invalid response returned from RunJobFlow: {"Error"=>{"Details"=>#<SocketError: getaddrinfo: nodename nor servname provided, or not known>, "Code"=>"InternalFailure", "Type"=>"Sender"}} while calling RunJobFlow on Amazon::Coral::ElasticMapReduceClient, retrying in 3.0 seconds.

After some poking around, I realized that I had specified "west-1" as my region when it should have been "us-west-1". I’m guessing this resulted in the client trying to contact a non-existent server.

So now, my jobs started, but failed immediately. I logged into the AWS console and clicked on one of the failed job flows to see the reason for the failure (Last State Change Reason):

The given SSH key name was invalid

Googling found: http://developer.amazonwebservices.com/connect/message.jspa?messageID=166768

At first this confused me, but then I went ahead, followed the link (while logged in), and did what it said. (Amazing how that works sometimes :-) ) It prompted me to create a new key and to assign it a name.

After I had generated the key and put its name in the credentials.json, things worked like a charm. It turns out that if you run a job from scratch, it has to fire up an EC2 instance in order to run the job, and that can take a while. To avoid that start up time, you can run:

elastic-mapreduce --create --alive --log-uri s3://my-example-bucket/logs

as mentioned in the README.TXT.
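
For reference, my credentials.json ended up looking roughly like the following. The field names are from the Ruby client’s README as I remember it and may differ between client versions, so treat this as a sketch and check the README that ships with your download; the values are placeholders.

{
  "access_id": "YOUR_AWS_ACCESS_KEY_ID",
  "private_key": "YOUR_AWS_SECRET_ACCESS_KEY",
  "keypair": "the-key-pair-name-you-created",
  "key-pair-file": "/path/to/that-key-pair.pem",
  "log_uri": "s3://my-example-bucket/logs",
  "region": "us-west-1"
}

The two fields that bit me were region (it wants the full "us-west-1", not "west-1") and keypair (it must name a key pair that actually exists in your account).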

My next steps are to:

  1. Modify the job flow and run that job flow.
  2. Run the job flow locally.
  3. Debug the MapReduce portion of the job flow.

Solving aggregation problems

In Folksemantic, we run into the following problems:

  • Duplicate entries. Search and recommendation results that list multiple entries for the same resource.
  • Catalog pages. Search and recommendation results that link to catalog pages for resources (people would rather go directly to the resource, but the metadata providers want people to go to their catalog entry for the resource).
  • Dead links. Results that link to resources that no longer exist.
  • Urls without metadata. When someone shares a resource or inserts the recommender widget in a page for which we don’t have metadata, we need to be able to generate metadata.

Duplicate entries show up because:

  • Two feeds specify entries with the same permalink.
  • The same feed gets added twice (maybe different formats for the same feed, e.g. RSS and Atom).
  • Multiple catalogs provide metadata for the same resource.

Dealing With Duplicate Feeds

Problem: In Folksemantic a user can enter the url of their blog and we will detect the feeds from the page and add them. We use the feeds to generate personal recommendations. The problem is, a blog typically has 3 or more feeds that all contain the same content, just provided in different formats (e.g. RSS, Atom). So we really don’t want to index all of the feeds.

Solution 1: One approach is to detect the duplicate feed the first time we harvest it, skip adding its entries to the index, and flag the feed as “duplicate” so that we don’t harvest it again, storing in the feed the id of the feed it duplicates. One potential problem: if someone registers a feed that contains only entries tagged a certain way (e.g. all of the entries tagged apple on the Gizmodo feed) and the main feed is already registered, then all of the entries on the filtered feed duplicate entries in the main feed, so the entries are duplicates, but the feed is not. And if we want to use that feed as a basis for making recommendations to the user, we don’t want to substitute the main feed.

Solution 2: Another approach is to add the feed and harvest it, but flag the entries as duplicates. Our thought is to store in each entry a list of all the feeds the entry belongs to. We need to verify that this won’t slow down our Lucene queries.

Solution 2 seems best; it would then be up to the app to avoid adding duplicate feeds (like the 4 feeds for the same blog that Folksemantic adds).
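
A minimal sketch of Solution 2 in Python, assuming a hypothetical fetch_entries helper and a simple dict keyed by permalink in place of our real harvester and Lucene index: duplicate entries collapse onto one record, and each record remembers every feed it appeared in so the app (or the query layer) can filter later.

def harvest_feed(feed_id, fetch_entries, entries_by_permalink):
    # fetch_entries(feed_id) is a hypothetical helper returning parsed entries.
    for raw in fetch_entries(feed_id):
        entry = entries_by_permalink.get(raw["permalink"])
        if entry is None:
            entry = {"permalink": raw["permalink"], "metadata": raw, "feed_ids": []}
            entries_by_permalink[raw["permalink"]] = entry
        if feed_id not in entry["feed_ids"]:
            entry["feed_ids"].append(feed_id)  # remember every feed this entry belongs to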

Dealing with Catalog Entries

A number of NSDL and other projects such as OER Commons have created large catalogs of online resources. Sometimes their metadata is harvested directly from the resource websites. Sometimes they enhance that metadata with new information. Sometimes they create metadata for resources that don’t provide their own metadata. The catalog websites often provide services such as rating and discussion, so they want people to come to their websites and use them. While these services are nice, when people are searching for resources they likely want to look at the resource first, make their own judgement if that is possible, and then read more about it if they are interested. I think this is because the cost of looking at an online resource is minimal (compared to buying something or attending a course, for example). So the catalog issue leads to several problems:

Problem: When people see search results, they likely want to go directly to the resource instead of to a catalog page.

Solution: When a catalog page is the only entry for a resource, that entry is flagged “primary”. As soon as we create an entry that goes directly to the resource, we flag the new entry as primary and the catalog entry as not primary; we also store the id of the catalog entry in the list of duplicate entries that we store in the new entry. When searching, by default return only primary entries unless the application explicitly requests all entries. Return a flag indicating that an entry has catalog entries. Provide an API for requesting the catalog entries for a specific entry.
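
Sketched in Python (our real filter would be a clause on a stored "primary" field in the Lucene query; query_matches stands in for the actual query):

def search(entries, query_matches, include_catalog_entries=False):
    # Hide non-primary (catalog) duplicates unless the app asks for everything.
    results = []
    for entry in entries:
        if not query_matches(entry):
            continue
        if include_catalog_entries or entry.get("primary", True):
            results.append({
                "entry": entry,
                "has_catalog_entries": bool(entry.get("catalog_entry_ids")),
            })
    return results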

Problem: In most cases, catalog metadata does not provide the url of the resource it is cataloging.

Solution: Initially flag the entry as “primary” so it will show up in search results. Later, asynchronously crawl the catalog pages to find the url of the catalogued resource. Once the direct url is known, create a new entry for the resource and store the id of the catalog entry in the list of “related entries” that we store for the new entry. Flag the catalog entry as not primary and the new entry as primary. Copy the metadata from the catalog entry into the new entry. Use the resource domain as the key for the feed to add the new entry to. If the feed does not already exist, create one for it.
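
A rough sketch of that asynchronous promotion step, with find_resource_url standing in for whatever crawl heuristic actually pulls the resource link out of the catalog page, and store standing in for our persistence layer:

from urllib.parse import urlparse

def promote_catalog_entry(catalog_entry, find_resource_url, store):
    resource_url = find_resource_url(catalog_entry)   # hypothetical crawler heuristic
    if resource_url is None:
        return None                                   # keep the catalog entry primary for now
    new_entry = {
        "permalink": resource_url,
        "metadata": dict(catalog_entry["metadata"]),  # copy the catalog metadata
        "primary": True,
        "related_entry_ids": [catalog_entry["id"]],
    }
    catalog_entry["primary"] = False
    domain = urlparse(resource_url).netloc            # resource domain keys the feed
    feed = store.find_or_create_feed(display_url=domain)
    store.add_entry(feed, new_entry)
    return new_entry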

Problem: If there are multiple entries (catalog etc.) for a resource, which metadata should we use to calculate the recommendations for the resource?

Solution: Options might be: (a) the metadata provided by the resource itself, (b) metadata generated by a crawl of the resource – I think this is bad because frequently the metadata is more descriptive than the page itself, (c) the first catalog entry found for the resource, or (d) the largest set of metadata for the resource. My thought is to always use the largest set of metadata for the resource unless there are no catalog entries (as in the case where we crawl a website), in which case we must use the metadata generated by the crawl. To facilitate this approach: (1) for each entry, we store whether or not the metadata came from the resource itself, and (2) whenever we detect a new catalog entry for a resource that already has an entry, we check whether the metadata in the existing entry was copied from a catalog entry; if it was, we compare the size of the metadata from the two entries and update the metadata with the new catalog entry’s metadata if it is larger. For the purpose of calculating recommendations it might make sense to use all of the metadata from all of the sources.
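
As a sketch of the "largest set of metadata wins" rule (metadata_size is just a placeholder for however we end up measuring completeness):

def metadata_size(metadata):
    # Crude completeness measure: total length of all field values.
    return sum(len(str(value)) for value in metadata.values())

def maybe_adopt_catalog_metadata(entry, new_catalog_metadata):
    # Never overwrite metadata the resource provided itself; otherwise keep the larger set.
    if entry.get("metadata_from_resource"):
        return False
    if metadata_size(new_catalog_metadata) > metadata_size(entry["metadata"]):
        entry["metadata"] = new_catalog_metadata
        return True
    return False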

Problem: When a website requests recommendations for a url, normally we want to return non-catalog entries, but when a catalog requests recommendations for one of its urls, they likely want their own catalog entries back if they exist.

Solution: When generating recommendations for a catalog, check whether the recommended entries have catalog entries from that catalog and, if so, recommend those catalog entries instead.

Detecting and Handling Feed Entry Deletions

Problem: OAI has a way to tell you that an entry has been deleted, but RSS does not. How can you detect when an entry has been deleted, and what should you do when it is deleted?

Solution: My thought is that this is just part of what our dead link handler does. It finds entries with dead links and flags them deleted or actually deletes them. When we re-index we remove items from the index that have been flagged deleted.

Dealing with Dead Links

Problem: Many times the resources in our indexes get taken down or moved without notification (the source of the metadata doesn’t get updated or it doesn’t get updated for a while). What should we do in that situation?

Solution: We will write a bot that flags entries as dead. Once entries are flagged dead, they won’t show up in search or recommendation results. Should they still be used as the basis for recommendations? Probably not. Maybe we create another process that looks for the new locations of dead entries?
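
A minimal sketch of such a bot using only the Python standard library; real code would add retries, politeness delays, and probably a "dead since" timestamp rather than flagging on the first failure:

import urllib.request
import urllib.error

def flag_dead_entries(entries, timeout=10):
    # Entries flagged deleted are dropped from the index the next time we re-index.
    for entry in entries:
        request = urllib.request.Request(entry["permalink"], method="HEAD")
        try:
            urllib.request.urlopen(request, timeout=timeout)
        except (urllib.error.URLError, ValueError):
            entry["deleted"] = True  # dead link: hide from search and recommendations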

Generating Metadata for a URL

Problem: When someone adds an entry but doesn’t provide metadata, we need to be able to generate metadata for the entry. We also need to know which feed to put it into.

Solution: The application should give us a feed id, or a display url for the feed, along with the entry URL. If it does not send a feed id, we will look for a feed using the host portion of the entry permalink. If one does not exist, we will create one and use the host as the display url for the feed; that way future entries from that host will always go into that feed.
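
Sketched with Python’s urlparse; find_feed_by_display_url and create_feed are placeholders for our real lookup and creation calls:

from urllib.parse import urlparse

def feed_for_entry(entry_url, feed_id, find_feed_by_display_url, create_feed):
    # Resolve which feed a new entry belongs to, falling back to a per-host feed.
    if feed_id is not None:
        return feed_id
    host = urlparse(entry_url).netloc
    feed = find_feed_by_display_url(host)
    if feed is None:
        feed = create_feed(display_url=host)  # future entries from this host land here too
    return feed["id"]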

Generating Recommendations for Web Pages We Haven’t Indexed Yet

This is similar to the previous issue: we want people to be able to add the OER Recommender widget to their pages and have it just start working, removing the requirement that they add their resources to our index before we can provide recommendations. We can analyze and provide recommendations in real time, but that tends to bury our server when it gets a bunch of requests for real-time recommendations all at once.

Problem: Provide recommendations for URLs that haven’t been indexed yet.

Solution: When the recommendation is requested, add an entry for the URL and flag it as needing to be scraped. Flag the feed as not recommendable. If we don’t have a domain feed for the URL, add a domain feed and specify it as the feed for the entry. Queue the feed for approval by site admins.
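
Roughly, the request handler would behave like this sketch; store and enqueue_for_scraping are placeholders, and the actual scraping and recommendation run happen later, offline:

def recommendations_for(url, store, enqueue_for_scraping):
    entry = store.find_entry_by_url(url)
    if entry is None:
        feed = store.find_or_create_domain_feed(url)
        feed["recommendable"] = False                  # held for admin approval
        entry = store.add_entry(feed, {"permalink": url, "needs_scrape": True})
        enqueue_for_scraping(entry)
        return []                                      # nothing to recommend yet
    return store.recommendations_for_entry(entry)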

This brings up the issue of being able to narrow the scope of the space into which recommendations are made. Depending on the context, we want to consider different sets of items to recommend. For example, in Folksemantic, for personal recommendations we let users add feeds they produce, but we don’t necessarily want to include their stuff in recommendations that we make to other people.

Problem: Narrow the scope of the space that we recommend items from.

Solution: Define recommendation tasks by specifying the aggregation of feeds that we are recommending from and the aggregation of feeds that we are recommending into. Store those ids in the recommendation table.
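
In other words, a recommendation task is little more than a pair of feed aggregations, something like:

from dataclasses import dataclass

@dataclass
class RecommendationTask:
    # Recommend items drawn from the source aggregation into the items of the
    # target aggregation; these ids are stored alongside each row in the
    # recommendation table so results stay scoped to their task.
    source_aggregation_id: int   # aggregation of feeds we recommend from
    target_aggregation_id: int   # aggregation of feeds we recommend into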

Configuring Apache and Tomcat to serve my java web application through port 80

Default Tomcat installations run on port 8080 so you get urls like:

http://mydomain.com:8080/lms/index.jsp

Some firewalls block port 8080 so I wanted my site to be available on port 80 so that it uses urls like:

http://mydomain.com/lms/index.jsp.

One option was to modify the Tomcat configuration to listen on port 80. However, I already have Apache installed and listening on port 80 (to serve other content), so I couldn’t do that. Instead I configured Apache to route requests for my web application to Tomcat. I’ve been through this process a number of times before, and it never seems to go smoothly, so I am documenting it here. I am running Apache 2.2 and Tomcat 6 on 32-bit Ubuntu Linux.

Overview

The steps are:

  1. Download the Apache jk connector module (mod_jk.so).
  2. Create Apache module configuration files for the jk connector (jk.load and jk.conf) and enable the module.
  3. Create a workers.properties file to configure the Tomcat worker for the connector.
  4. Define an AJP connector in your Tomcat configuration file (server.xml)
  5. Assign urls to Tomcat in your Apache virtual hosts file.

Download the Apache jk connector module (mod_jk.so)

Apache uses the jk connector module to talk to Tomcat. I downloaded it from a subdirectory of http://www.apache.org/dist/tomcat/tomcat-connectors/jk/binaries/. I wasn’t sure whether I was running a 32-bit version (i586 directory) or a 64-bit version (x86_64). To find out, I ran:

file /usr/bin/file
/usr/bin/file: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, stripped

So I downloaded: http://www.apache.org/dist/tomcat/tomcat-connectors/jk/binaries/linux/jk-1.2.28/i586/mod_jk-1.2.28-httpd-2.2.X.so

I chose that version because I am running Apache 2.2. There are different versions for different versions of Apache.

I put the file in my Apache modules directory (/usr/lib/apache2/modules/) and renamed it to mod_jk.so.

Create Apache module configuration files for the jk connector (jk.load and jk.conf) and enable the module

In order to get Apache to load and configure the jk connector module, I created jk.load and jk.conf files (in /etc/apache2/mods-available/) and then enabled them. jk.load just tells Apache where to find the module:

LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so

jk.conf ties everything together by configuring the jk connector module:

# Where to find workers.properties
# Update this path to match your conf directory location (put workers.properties next to httpd.conf)
JkWorkersFile /etc/apache2/workers.properties

# Where to put jk shared memory
# Update this path to match your local state directory or logs directory
JkShmFile     /var/log/apache2/mod_jk.shm

# Where to put jk logs
# Update this path to match your logs directory location (put mod_jk.log next to access_log)
JkLogFile     /var/log/apache2/mod_jk.log

# Set the jk log level [debug/error/info]
JkLogLevel    info

# Select the timestamp log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "

For more information about the Apache jk connector module configuration, see the Tomcat Connector – Apache Webserver HowTo.

Initially I set the JkLogLevel to debug, so I could see any error messages, but then changed it to info once I had everything working.

After creating the files, I enabled the module using:

sudo a2enmod jk

That creates symlinks to the jk.load and jk.conf files in the mods-enabled directory where my Apache is configured to look for modules to load.

Define an AJP connector in your Tomcat configuration file (server.xml)

AJP is an efficient protocol that Apache and Tomcat can be configured to use to talk to each other. I set up an AJP connector in my Tomcat configuration file (/etc/tomcat6/server.xml). The default configuration file has the connector defined but commented out, so I uncommented it:

<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

For details see the Tomcat AJP Connector documentation.

Create a workers.properties file to configure the Tomcat AJP worker for the connector

“A Tomcat worker is a Tomcat instance that is waiting to execute servlets or any other content on behalf of some web server”.

Note: this quote from the documentation is a bit curious because nowhere in the Tomcat configuration files do I tell Tomcat about the worker. I think the worker is actually a process that the jk connector spawns.

I configured a worker to listen to Apache requests by creating a workers.properties file in the same directory as the Apache configuration file (/etc/apache2/workers.properties).

# Define 1 real worker using ajp13
worker.list=worker1

# Set properties for worker1 (ajp13)
worker.worker1.type=ajp13
worker.worker1.host=localhost
worker.worker1.port=8009

The jk connector knows how to talk to this worker, because the file name is specified in the Apache jk connector configuration file (/etc/apache2/mods-available/jk.conf). For more information see the Tomcat Connector Quick Start or the Tomcat Connector Reference Guide.

Assign urls to Tomcat in your Apache virtual hosts file

After I configured a Tomcat worker to listen to AJP requests and configured Apache to use the jk connector module to talk to that worker, the last thing that was needed was to configure my web site’s virtual host (/etc/apache2/sites-available/default) to route urls to Tomcat:

<VirtualHost *:80>
...
        JkMount /lms/* worker1
...
</VirtualHost>

Note that worker1 is the name I gave to the worker I set up in the workers.properties file. Also note that by using a * mask, I routed all requests (including requests for static files) through Tomcat. Alternatively, I could have routed only JSP requests to Tomcat, using:

<VirtualHost *:80>
...
        JkMount /lms/*.jsp worker1
...
</VirtualHost>
I would then have needed to add Directory configurations to the virtual host telling Apache where to serve the static files from.
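
Something like the following would do it; this is only a sketch, the webapp path is an assumption (point it at your actual Tomcat webapps directory), and the Order/Allow directives are the Apache 2.2 syntax:

<VirtualHost *:80>
...
        JkMount /lms/*.jsp worker1

        # Serve static files for /lms straight from the exploded webapp
        # (path below is an assumption; adjust to where your webapp actually lives,
        #  and in a real setup also deny access to /lms/WEB-INF)
        Alias /lms /var/lib/tomcat6/webapps/lms
        <Directory /var/lib/tomcat6/webapps/lms>
                Options -Indexes
                Order allow,deny
                Allow from all
        </Directory>
...
</VirtualHost>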

Restart Tomcat and Apache

Of course, after I had done all of this, I had to restart Tomcat and Apache:
sudo /etc/init.d/tomcat6 restart
sudo /etc/init.d/apache2 restart

Lifecycle

To the best of my understanding, the relevant lifecycle is:

  1. When Tomcat starts up, it begins listening for AJP requests on port 8009 (because the connector is defined in /etc/tomcat6/server.xml).
  2. When Apache starts up, it loads the jk connector module (because it is defined in /etc/apache2/mods-enabled/jk.load).
  3. When Apache loads the jk connector, its configuration file (/etc/apache2/mods-enabled/jk.conf) tells it to send requests to the specified Tomcat worker and to use shared memory to do that.
  4. It is not clear to me whether or not the Tomcat worker gets spawned when Apache starts up or on each request. I don’t see how it could get spawned when Tomcat starts up since Tomcat has no way of knowing about it.
  5. Apache receives a request for a url that is mapped to Tomcat (in the virtual host file – /etc/apache2/sites-enabled/default).
  6. Apache uses the jk connector module (mod_jk.so) to generate a request to send to Tomcat via a Tomcat worker.
  7. The Tomcat worker communicates with Tomcat using the protocol (AJP) and port (8009) defined in the workers configuration file (/etc/apache2/workers.properties).
  8. Tomcat processes the request and returns the response back through the worker and connector to Apache which returns it to the client.

Kind of complicated, huh? And of course, this is just my best guess.

Questions and problems

The questions and problems I ran into this time through the process were:
  • Not knowing which version of the JK connector to download
  • Forgetting I needed to configure the AJP connector in the Tomcat configuration file
  • Initially I routed only requests for JSP pages to Tomcat and so my stylesheets and images did not show up