What is Feed Analyser?

Feed Analyser aggregates content from different channels (for now, only Twitter and GitHub commits are properly supported) and displays it in one place. In most hackathons, organisations, and events, content spans many places, e.g. Discord, Slack, IRC, mailing lists, and issue trackers. This project aims to aggregate news from those sources and display it in one place.

It also allows you to vote on feed items. You can view a demo here

How does it work?

The program scrapes content from Twitter and GitHub and stores the data in Redis sorted sets, each item with a score. You can then read the items back from Redis.
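As an illustration of that storage pattern, here is a minimal Python sketch that simulates Redis sorted-set semantics (ZADD / ZREVRANGE) with a plain dict, so it runs without a live Redis server. The key layout and item names are assumptions for illustration, not the project's actual schema:

```python
# Simulate a Redis sorted set with a plain dict: member -> score.
# In the real project this would be a Redis key such as
# "<feed-prefix>:twitter" (the prefix name here is hypothetical).
feed = {}

def zadd(member, score):
    """Store an item with an initial score, like ZADD."""
    feed[member] = score

def zrevrange_withscores(start, stop):
    """Return (member, score) pairs ordered by score, highest
    first, like ZREVRANGE ... WITHSCORES (stop is inclusive)."""
    ranked = sorted(feed.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[start:stop + 1]

# The scraper pushes each tweet/commit with a starting score:
zadd("tweet:1 'Hackathon kickoff!'", 10)
zadd("commit:2 'Fix feed parser'", 7)

# A page would then read the top-scored items back:
top = zrevrange_withscores(0, 9)
```

With a real Redis the same flow would be `ZADD` on ingestion and `ZREVRANGE ... WITHSCORES` when rendering a page.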

There is also built-in support for voting, whereby each vote updates the ZSCORE of an individual item, increasing or decreasing the likelihood of it being displayed on a page.
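A vote can then be pictured as a ZINCRBY on the item's score. The sketch below simulates that without a live Redis; the plus-or-minus 1 increment is an assumption for illustration, not necessarily the increment the project uses:

```python
# Two items starting with equal scores, standing in for a sorted set.
scores = {"item:a": 5.0, "item:b": 5.0}

def zincrby(member, delta):
    """Adjust a member's score by delta, like ZINCRBY; returns the
    new score. An assumed +/-1 delta models an up/down vote."""
    scores[member] = scores.get(member, 0.0) + delta
    return scores[member]

def top_item():
    """The member most likely to be displayed: highest score wins."""
    return max(scores, key=scores.get)

zincrby("item:b", 1.0)   # an upvote
zincrby("item:a", -1.0)  # a downvote
```

After the two votes, "item:b" outranks "item:a", which is exactly how voting nudges what appears on the page.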

Why store things in Redis? Why not just generate pages straight from the scraped data?

We'd like to extend the program later so that it can serve as a source of aggregate data for other programs. That requires an intermediary data store, and Redis was chosen for the job.

How do I run this?

If you are fetching data from Twitter, ensure you have Python 3 and twint installed; twint is used to fetch data without going through Twitter's restrictive API. Abide by Twitter's terms of service and usage policies when using that data!

  • Configure the feed reader

See configuration examples.

  • Start the Twitter/GitHub fetching daemon:

# Replace etc/biohackathon.conf.rkt with your own conf file with the correct settings
racket newsfeed/update-feed.rkt -c etc/biohackathon.conf.rkt

The worker daemon fetches data from Twitter and GitHub at the interval set in the configuration (typically every 6 hours or so) and feeds the results to Redis. Note the feed-prefix that distinguishes the individual feeds.
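The worker's cycle can be sketched as a simple fetch-store-sleep loop. The function names and the interval handling below are hypothetical stand-ins; in the real project the interval and feed-prefix come from the Racket configuration file:

```python
import time

def fetch_twitter():
    # Placeholder for the twint-based Twitter scraper.
    return ["tweet:example"]

def fetch_github():
    # Placeholder for the GitHub commit fetcher.
    return ["commit:example"]

def store(items):
    # Placeholder for the Redis ZADD step described above.
    print(f"stored {len(items)} items")

def run_worker(interval_seconds, cycles):
    """Fetch from both sources, store the results, then sleep
    until the next cycle; the configuration would set the real
    interval (e.g. every 6 hours)."""
    for _ in range(cycles):
        store(fetch_twitter() + fetch_github())
        time.sleep(interval_seconds)

run_worker(interval_seconds=0, cycles=1)  # demo: one immediate cycle
```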

  • Run the voting 'server'

The voting server responds to vote requests from the web service.

# Replace the conf with your own conf file with the correct settings
racket newsfeed/voting-server.rkt -c etc/biohackathon.conf.rkt

Suggestions / Help Wanted

  • Fetch data from Slack

  • Come up with an algorithm to sieve out noise (e.g. machine learning)

  • Expose content using an API

  • Come up with a sane deployment process


  • [x] Better Parsing from Twitter

  • Fetch commits

  • Add package and dependencies to Guix

  • Integrate to GN2

  • Fetch Pull Requests and merges from GitHub and display them

  • Fetch Data from Slack

  • Make a lib layer and add project to upstream to

  • Fetch Data from IRC

  • Add NLP to make more meaningful data


This tool is published under the GPLv3. See LICENSE.