Entries filed under English

Using nanoc for podcast feeds

Posted on 14 March 2015

Nanoc is a static site generator much like Jekyll or Octopress, but with a more minimalistic approach. These generators are not necessarily the most suitable choice for a podcast website, but it’s possible, and you can save on web space and traffic if you host on GitHub or Neocities.

Creating the podcast feed is basically like writing a normal Atom feed for the blog, since podcast feeds are indeed just feeds with an enclosure tag that contains the URL of the audio or video file. This guide does not cover the iTunes-specific tags; I might add them one day. Follow the instructions in the documentation for the Blogging helper and tag your podcast episodes as kind: article.
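For illustration, an enclosure link inside a generated Atom entry could look roughly like this (the URL and length here are placeholders, not values from this setup):

<link rel="enclosure" type="audio/mpeg"
      href="http://yourpodcast.com/mp3/001-podcast-episode-title.mp3"
      length="12345678"/>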

The media files are placed in content/mp3 and content/opus, which is where the links in the feeds will point to later.

I invented the fields mp3 and opus, since these are the file formats I want to use. The values are the filenames. The header of a new episode/post would look like this:

---
title: 001 - Podcast Episode Title
created_at: 2015-03-14 09:00:00 +0000
kind: article
tags: [podcast,topic]
mp3: 001-podcast-episode-title.mp3
opus: 001-podcast-episode-title.opus
---

These fields have to be filled in manually every time, so make sure you have the exact filename, as some podcast clients won’t allow correcting the URL.

The next step is to write a separate feed for each format. For that, I’m using a new field, format, which will be interpreted by the helper class later on. For example, I called my normal feed blogfeed and the podcast feeds mp3feed and opusfeed. Create the file blogfeed.erb in the content folder and fill it with the following:

<%= atom_feed :title => 'repats podcast blog', :author_name => 'repat',
:author_uri => 'http://repat.de', :limit => 10, :format => 'blog' %>

The mp3feed.erb and opusfeed.erb are filled accordingly:

<%= atom_feed :title => 'repats podcast mp3', :author_name => 'repat',
:author_uri => 'http://repat.de', :limit => 10, :format => 'mp3' %>

<%= atom_feed :title => 'repats podcast opus', :author_name => 'repat',
:author_uri => 'http://repat.de', :limit => 10, :format => 'opus' %>

The next step is to use the Blogging helper locally in your nanoc installation. To do that, you need to copy it from the gems folder into your lib folder. For me, that was:

$ cp /var/lib/gems/1.9.1/gems/nanoc-3.7.5/lib/nanoc/helpers/blogging.rb lib/

It should be included like this in lib/default.rb:

include Nanoc3::Helpers::Blogging

Add the following attribute to the AtomFeedBuilder class

attr_accessor :format

If you don’t trust yourself to always remember the files, you might want to add this exception to the validate_feed_item function:

if format.nil?
  raise Nanoc::Errors::GenericTrivial.new('Cannot build Atom feed: no format (mp3, opus, blog) in params, item or site config')
end

After the # Add link comment is a good place to insert the enclosure tag. File.size() will only work if the files actually exist under exactly these names. This code could probably be written a bit more defensively, but I’m not a Ruby developer, and since every post will have an MP3 file and an Opus file, it’s not a problem this way.

# Add podcast enclosure
if format == 'mp3'
  xml.link(href: "http://yourpodcast.com/mp3/" + a[:mp3],
           length: File.size("content/mp3/" + a[:mp3]),
           type: "audio/mpeg", rel: "enclosure")
elsif format == 'opus'
  xml.link(href: "http://yourpodcast.com/opus/" + a[:opus],
           length: File.size("content/opus/" + a[:opus]),
           type: "audio/ogg", rel: "enclosure") # Opus in an Ogg container, not audio/mpeg
end

To interpret the mp3 and opus attributes from the posts, the last step is to add this line to the atom_feed function:

      builder.format            = params[:format]

You might need to install builder for this to run:

$ sudo gem install builder

The only thing left to do is to edit the Rules file:

compile '/blogfeed' do
  filter :erb
end
compile '/mp3feed' do
  filter :erb
end
compile '/opusfeed' do
  filter :erb
end
[...]
route '/blogfeed' do
  '/blogfeed.xml'
end
route '/mp3feed' do
  '/mp3feed.xml'
end
route '/opusfeed' do
  '/opusfeed.xml'
end


You can find the blogging.rb and the Rules file on GitHub.


Simple wget crawler for list of files

Posted on 6 February 2015

This script can be helpful for downloading a set of files from a web server that you don’t have (S)FTP access to. The input file consists of a list of filenames, one name per line.


#!/bin/bash
# Read filenames from the input file and fetch each one from the web server
file=$1
WEBSERVER="http://webserver.tld/folder/"
while IFS= read -r line; do
  FULLURL="$WEBSERVER$line"
  wget -nc "$FULLURL"
done < "$file"

First, the first command line argument is saved into the variable file, and the web server address is saved into the WEBSERVER variable. IFS stands for Internal Field Separator; clearing it makes read keep each line intact while the while loop works through the file line by line. Inside the loop, the line that was read is concatenated with the web server address into FULLURL. Then wget is called with -nc (no-clobber), which skips files that are already present in the current folder. If you only want to check that the files exist on the server without downloading them, wget also offers the --spider option.
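Assuming the script was saved as, say, fetch-files.sh (the name is a placeholder) and the list of filenames lives in filelist.txt, it could be run like this:

$ chmod +x fetch-files.sh
$ ./fetch-files.sh filelist.txt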

You can find the script on GitHub.


Collect currency exchange rates in a MySQL database with PHP and fixer.io API

Posted on 4 February 2015

If you work at a company that buys and sells goods in many different currencies, it might be a good idea to use the latest exchange rates. It can also be useful to store old exchange rates in order to clarify or verify old business decisions. If once a day is enough for you, fixer.io offers a free, simple REST API. Because most of the code at my work is written in PHP, but I usually do requests in JavaScript or Python, I was looking for something similar and found the Requests library by rmccue on GitHub. It’s easily installed with Composer (the PHP counterpart to npm or pip):

$ curl -sS https://getcomposer.org/installer | php

Now open a text editor and write the following:

{
    "require": {
        "rmccue/requests": ">=1.0"
    }
}

Save the file as composer.json and run:

$ php composer.phar install

Now there is a folder called vendor, and in it you will find the rmccue/requests folder. The API is pretty straightforward, as explained on the project’s front page:

include('vendor/rmccue/requests/library/Requests.php');
Requests::register_autoloader();

$BASE = 'EUR';
$request = Requests::get('https://api.fixer.io/latest?base=' . $BASE, array('Accept' => 'application/json'));
if ($request->status_code == 200) {
    $response = json_decode($request->body);
    $GBP = $response->rates->GBP;
    $CAD = $response->rates->CAD;
    $USD = $response->rates->USD;
    $NOK = $response->rates->NOK;
    $CNY = $response->rates->CNY;
    $rBASE = mysql_real_escape_string($response->base);
    $date = mysql_real_escape_string($response->date);
    $currencies = mysql_real_escape_string("1.0, $USD, $GBP, $NOK, $CNY, $CAD");
    $qry = "INSERT INTO `EXCHANGE-RATES-FIXERIO`(date, base, eur, usd, gbp, nok, cny, cad) VALUES ('$date', '$rBASE', $currencies);";
    $insert = mysql_query($qry, $MYSQLCONNECTION) or print mysql_error();
}

So the Requests file from the library is included. I also set up the database connection earlier; there’s lots of documentation for that. Because we are based in Europe, I chose the Euro (EUR) as the base currency. Apart from the get() method, you need nothing else to send a request. If the request returns 200 (OK), the response is decoded and the individual rates are saved into variables, e.g. for British Pound, US Dollar, Canadian Dollar, Chinese Renminbi and Norwegian Krone. Just to make sure we have the right base, it is parsed as well. The date can be used as a string, even though the MySQL table uses a DATE column (see below). The values are then escaped, just in case something goes wrong over at fixer.io, and finally inserted into the table.
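For reference, the JSON response from fixer.io has roughly this shape (the rate values here are only illustrative):

{
    "base": "EUR",
    "date": "2015-02-04",
    "rates": {
        "CAD": 1.4178,
        "CNY": 7.1329,
        "GBP": 0.7483,
        "NOK": 8.7355,
        "USD": 1.1411
    }
}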

The table could look like this:

CREATE TABLE `EXCHANGE-RATES-FIXERIO` (
  `date` date NOT NULL,
  `base` varchar(3) NOT NULL,
  `eur` double NOT NULL,
  `usd` double NOT NULL,
  `gbp` double NOT NULL,
  `nok` double NOT NULL,
  `cny` double NOT NULL,
  `cad` double NOT NULL
)


You can also find this code on GitHub.


Morsecode As A Service: node.js app with restify and Heroku

Posted on 22 January 2015

I wanted to play around with node.js and REST APIs. Heroku is widely used for deploying node.js apps, not least because they give you one free instance to test your code and the possibility to use your own domain name (via CNAME).

So far the code is rather trivial. This, for example, is the encode function (plaintext -> morse code). It uses the npm module morse to encode the string given in the request parameters and returns it together with the plaintext in an object. The requests are handled by restify.

function encode(req, res, next) {
  var answer = {};
  answer.plaintext = req.params.string.toUpperCase();
  answer.morsecode = morse.encode(req.params.string);
  res.send(answer);
  next();
}

This function is registered as a route handler later on:

server.get('/encode/:string', app.encode);

This starts the server on the port provided by the Heroku instance (or 8080 locally):

var port = process.env.PORT || 8080;
server.listen(port, function() {
  console.log('%s listening at %s', server.name, server.url);
});

The decode function is equivalent, except that I check that the request contains only “.”, “-” and whitespace.
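A sketch of what that could look like (the exact validation is an assumption based on the description above, not the app’s actual code):

function decode(req, res, next) {
  // Only dots, dashes and whitespace are valid morse input
  if (!/^[.\-\s]+$/.test(req.params.string)) {
    res.send(400, { error: 'not a valid morse code string' });
    return next();
  }
  var answer = {};
  answer.morsecode = req.params.string;
  answer.plaintext = morse.decode(req.params.string);
  res.send(answer);
  next();
}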

I wrote the documentation with GitHub Pages’ automatic page generator in the repository, so with the following code the user is redirected there when requesting the root “/”.


function redirectToDocumentation(req, res, next) {
  res.send(302, null, {
    Location: API_DOKU
  });
  next();
}

It’s reachable at morsecode-api.de or morsecodeapi.herokuapp.com.


Using a Perl script because refactoring of a project with Android tools didn’t work

Posted on 20 December 2014

I wanted to rename my project, but since it has a lot of dependencies, I guess that caused an error somewhere, and I got this error message:

Refactoring
A fatal error occurred while performing the refactoring.
An unexpected exception occurred while creating a change object. See the error log for more details.

So I just used the normal refactoring feature of Eclipse, which, not surprisingly, also caused an error. After editing the package entry in the AndroidManifest, the import of the resources in the source files didn’t work. It still said:

import com.example.oldpackage.R

Only a couple of resource files needed manual editing, but the Java files were a problem. What did the trick for me was this one-liner:

perl -pi -w -e 's/import com.example.oldpackage.R/import com.example.newpackage.R/g;' `grep -r -l "import com.example.oldpackage.R"`

I realise this is rather quick & dirty (do a backup ;) ), but it did work for this project. A short explanation:

perl

  • -p puts the code in a loop over the input lines and prints each line after processing (like -n, but sed-style)
  • -i edits the files in place
  • -w gives you warnings
  • -e takes the one-line program to run
  • /g makes the substitution global, replacing every occurrence in a line

grep

  • -r recursive
  • -l prints only the names of the files that match the search string


Automate selling at LaRedoute #2: Parse order files

Posted on 16 December 2014

This blog post is part of the series Automate selling at LaRedoute.

  • Part 1: Get new orders
  • Part 2: Parse order files
  • Part 3: Upload response files
  • Part 4: Update quantity and price feed

In part 1, the files containing orders were downloaded into a folder called OrdersFromLaRedoute. The next script is going to go through that folder, parse each file and insert its contents into a table.

These are the columns the table must have, because we’re simply going to insert everything that’s in the CSV files:


$dbValues = array('MarketplaceID', 'OrderID', 'StorefrontOrderID', 'OrderDate', 'BuyerEmailAddress', 'BuyerName', 'BuyerPhoneNumber', 'OrderItemCode', 'ItemStatus', 'SKU', 'Title', 'Quantity', 'ItemPrice', 'ItemTax', 'ShippingCharge', 'ShippingTax','ItemFee', 'Currency', 'ShippingOption', 'PaymentInfo', 'ShippingAddressName', 'ShippingAddressFieldOne', 'ShippingAddressFieldTwo', 'ShippingAddressFieldThree', 'ShippingCity', 'ShippingStateOrRegion', 'ShippingPostalCode', 'ShippingCountryCode', 'ShippingPhoneNumber', 'BillingAddressName', 'BillingAddressFieldOne', 'BillingAddressFieldTwo', 'BillingAddressFieldThree', 'BillingCity', 'BillingStateOrRegion', 'BillingPostalCode', 'BillingCountryCode', 'BillingPhoneNumber');


Therefore I created the table more or less like this:

CREATE TABLE `ORDERS-HISTORY` (
  `MarketplaceID` int(11) DEFAULT NULL,
  `OrderID` varchar(50) DEFAULT NULL,
  `StorefrontOrderID` varchar(50) DEFAULT NULL,
  `OrderDate` varchar(50) DEFAULT NULL,
  `BuyerEmailAddress` varchar(50) DEFAULT NULL,
  `BuyerName` varchar(50) DEFAULT NULL,
  `BuyerPhoneNumber` varchar(50) DEFAULT NULL,
  `OrderItemCode` varchar(50) NOT NULL DEFAULT '',
  `ItemStatus` varchar(10) NOT NULL DEFAULT '',
  `SKU` int(11) DEFAULT NULL,
  `Title` varchar(50) DEFAULT NULL,
  `Quantity` int(11) DEFAULT NULL,
  `ItemPrice` decimal(8,2) DEFAULT NULL,
  `ItemTax` decimal(8,2) DEFAULT NULL,
  `ShippingCharge` decimal(8,2) DEFAULT NULL,
  `ShippingTax` decimal(8,2) DEFAULT NULL,
  `ItemFee` decimal(8,2) DEFAULT NULL,
  `Currency` varchar(3) DEFAULT NULL,
  `ShippingOption` varchar(10) DEFAULT NULL,
  `PaymentInfo` varchar(50) DEFAULT NULL,
  `ShippingAddressName` varchar(70) DEFAULT NULL,
  `ShippingAddressFieldOne` varchar(70) DEFAULT NULL,
  `ShippingAddressFieldTwo` varchar(70) DEFAULT NULL,
  `ShippingAddressFieldThree` varchar(70) DEFAULT NULL,
  `ShippingCity` varchar(70) DEFAULT NULL,
  `ShippingStateOrRegion` varchar(70) DEFAULT NULL,
  `ShippingPostalCode` int(11) DEFAULT NULL,
  `ShippingCountryCode` varchar(3) DEFAULT NULL,
  `ShippingPhoneNumber` varchar(50) DEFAULT NULL,
  `BillingAddressName` varchar(70) DEFAULT NULL,
  `BillingAddressFieldOne` varchar(70) DEFAULT NULL,
  `BillingAddressFieldTwo` varchar(70) DEFAULT NULL,
  `BillingAddressFieldThree` varchar(70) DEFAULT NULL,
  `BillingCity` varchar(70) DEFAULT NULL,
  `BillingStateOrRegion` varchar(70) DEFAULT NULL,
  `BillingPostalCode` int(11) DEFAULT NULL,
  `BillingCountryCode` varchar(3) DEFAULT NULL,
  `BillingPhoneNumber` varchar(50) DEFAULT NULL,
  PRIMARY KEY (`OrderID`,`OrderItemCode`,`ItemStatus`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;


As you can see, the primary key consists of OrderID, OrderItemCode and ItemStatus. An OrderID is the unique identifier of one order, which consists of at least one, but possibly many, OrderItemCodes. An OrderItemCode represents an SKU plus a quantity. For newly created orders, the ItemStatus will appear as “Created”. Once an item is accepted, LaRedoute will put a file into the ToSupplier folder with exactly the same OrderID and OrderItemCode, but the ItemStatus “ToShip”, so the same item shows up twice under different statuses. This will be important in part 3.

This will give you an array of the files downloaded in part 1:


$dir = "OrdersFromLaRedoute";
$dirAsArray = scandir("OrdersFromLaRedoute");


This loop takes each file (“.” and “..” show up because it’s a directory listing) and opens it. If the current line includes the word “BuyerName”, that is used as an indication that it is the header line, and the loop continues with the next iteration. Otherwise, the $csvLine array can be inserted into a database table (see the sketch after the loop). Because this script could run as e.g. a cron job, moving the processed files away is necessary, in this case to an archive folder.


foreach ($dirAsArray as $file) {
    if (!($file === "." || $file === "..")) {
        if (($handle = fopen($dir . "/" . $file, "r")) !== FALSE) {
            while (($csvLine = fgetcsv($handle, 5000, "\t")) !== FALSE) {
                if (in_array("BuyerName", $csvLine)) {
                    continue;
                }
                // insert $csvLine into table
            }
            fclose($handle);
        }
        rename($dir . "/" . $file, $dir . "/archive/" . $file);
    }
}
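For completeness, here is a rough sketch of what the insert marked by the comment could look like, assuming each CSV line holds exactly one value per column in $dbValues and that $MYSQLCONNECTION is the database link from part 1:

// Build the INSERT statement from the column list and the current CSV line
$escaped = array_map('mysql_real_escape_string', $csvLine);
$qry = "INSERT INTO `ORDERS-HISTORY` (`" . implode("`, `", $dbValues) . "`) "
     . "VALUES ('" . implode("', '", $escaped) . "')";
mysql_query($qry, $MYSQLCONNECTION) or print mysql_error();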



Automate selling at LaRedoute #1: Get new orders

Posted on 16 December 2014

This blog post is part of the series Automate selling at LaRedoute.


The French marketplace LaRedoute unfortunately doesn’t have a real API, but they do have ways to automate some processes; a lot of smaller marketplaces follow the same concept. You get credentials for an SFTP server. On this server you will find the folders ToSupplier and FromSupplier, where the “supplier” (aka you) can upload and download a range of files documented by Merchantry in their blog. The processing of uploaded files can take up to 6 hours, but is sometimes done in only a couple of minutes, so I’m going to assume the worst case of 6 hours in this post.

While programming a couple of scripts I found the following problems:

  • the server is incredibly slow at times (better at night), so connections sometimes just time out
  • sometimes the listing of the ToSupplier folder times out because there are too many files (according to support… huh?), so they have to be deleted regularly
  • not only the connection to LaRedoute but also the connection to my local MySQL server times out
  • I have to reserve a purchased item immediately once I have accepted it on LaRedoute, because it could be sold elsewhere during the 6 hours LaRedoute might take to give me the shipping address

New orders can be found in the ToSupplier folder as tab-separated CSV files (with a .txt extension) named OrdersYYYY-MM-DD-hh-mm-ss.txt.

Since PHP is the company’s main language, I will show a couple of scripts that automate downloading and processing those files. The code is of course simplified for better understanding. We’re using SFTP instead of FTP, and I found phpseclib to be the most usable library for that.

I will propose the use of two tables in the MySQL database: TEMP-FILENAMES and FILENAMES-HISTORY. Both have a single unique column, filename. FILENAMES-HISTORY will contain the name of every file ever processed by the following script; TEMP-FILENAMES is a helper table that is truncated after every run. A sketch of the schema follows below.
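The two helper tables could be created more or less like this (the column length is an assumption):

CREATE TABLE `TEMP-FILENAMES` (
  `filename` varchar(255) NOT NULL,
  UNIQUE KEY (`filename`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `FILENAMES-HISTORY` (
  `filename` varchar(255) NOT NULL,
  UNIQUE KEY (`filename`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;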

First, we need to establish a connection:


$sftp = new Net_SFTP(SFTP_LAREDOUTE_HOST);
if (!$sftp->login(SFTP_LAREDOUTE_USER, SFTP_LAREDOUTE_PASS)) {
    exit('Login Failed');
}

Then we change into the directory. This is a command that usually involves listing the directory you change into, but since this is not a graphical client, the real timeout tends to happen on the line below: $nlist will simply be null if the listing fails, and I assume it didn’t work if it takes more than 30 seconds.

$sftp->chdir('/ToSupplier');

$beforetime = time();
$nlist = $sftp->nlist();
$aftertime = time();
if (($aftertime - $beforetime) > 30) {
    exit('Timeout while listing directory');
}

The next piece of code is only executed if the listing worked. Every filename that includes the word “Order” is inserted into the temporary table:

foreach ($nlist as $filename) {
    if (strpos($filename, 'Order') !== false) {
        $qry = "INSERT INTO `TEMP-FILENAMES`(`filename`) VALUES ('" . $filename . "')";
        $insert = mysql_query($qry, MYSQLCONNECTION) or print mysql_error();
    }
}

Now you can look at the difference between the filenames in your history table and the possibly new ones in the temporary table:

$tmpCmpFilenames = array();
$qry = "SELECT `filename` FROM `TEMP-FILENAMES` WHERE `filename` NOT IN (SELECT `filename` FROM `FILENAMES-HISTORY`)";
$select = mysql_query($qry, MYSQLCONNECTION) or print mysql_error();
while ($row = mysql_fetch_assoc($select)) {
    $tmpCmpFilenames[] = $row['filename'];
}

Now we have all the new files in the array $tmpCmpFilenames. The correct way would be to verify the downloaded files with hashes. Instead, we decided to misuse the file size, since it’s a good indicator that something didn’t work properly ;) Files that were not downloaded correctly are deleted from the array; they will show up again the next time the script runs.

foreach ($tmpCmpFilenames as $key => $filename) {
    $remotefilesize = $sftp->size($filename);
    $sftp->get($filename, 'OrdersFromLaRedoute/' . $filename);
    $localfilesize = filesize('OrdersFromLaRedoute/' . $filename);
    if ($remotefilesize != $localfilesize) {
        unset($tmpCmpFilenames[$key]);
    }
}

We can now insert the remaining filenames into the history table:

foreach ($tmpCmpFilenames as $filename) {
    $qry = "INSERT INTO `FILENAMES-HISTORY`(`filename`) VALUES ('" . $filename . "')";
    $insert = mysql_query($qry, MYSQLCONNECTION) or print mysql_error();
}

Last but not least, the temporary table needs to be truncated for the next run:

$truncate = mysql_query("TRUNCATE TABLE `TEMP-FILENAMES`", MYSQLCONNECTION) or print mysql_error();

The next step is described in part 2 of this series.


Minimal Mensa Plan

Posted on 9 December 2014

We’ve been doing a lot of website scraping for a university project lately, so I decided to write a little app that scrapes the university canteen’s website for the lunch menu.


Android doesn’t allow network connections on the main thread, so the work can’t happen directly in the main Activity; I’m using a private AsyncTask class for that. Once the doInBackground method finishes, onPostExecute is called automatically.
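A rough sketch of how such a private class can be structured (the class name is a placeholder, not the app’s actual code):

private class FetchMenuTask extends AsyncTask<Void, Void, Void> {
    @Override
    protected Void doInBackground(Void... params) {
        // Download and parse the menu here, off the main thread
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        // Update the UI here; this runs on the main thread again
    }
}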

Luckily the jsoup library, which is used for parsing, also provides a way to download a website for parsing:

Document doc = Jsoup.connect("http://speiseplan.studierendenwerk-hamburg.de/de/520/2014/0/").get();

There is only one element with the category class, and it contains the date. The dish-description class contains what you will later see in the app; its count is needed for a for loop later on.

date = doc.select(".category").text();
int maxDishes = doc.getElementsByClass("dish-description").size();

With the next piece of code, all the dish descriptions are extracted. The original website annotates each dish with details about the food (made with alcohol, pork, or whether it’s vegetarian, etc.). To filter this out, I use a regular expression that matches:

one non-word character (\W = an opening bracket), then possibly multiple digits (\d) and non-word characters (\W = commas), and then another non-word character (\W = a closing bracket)

String dish = doc.getElementsByClass("dish-description").get(i).text().replaceAll("(\\W[\\d\\W]*\\W)", " ");

These dishes are then saved together with every second price (the one for students; the others are for employees) in a HashMap, which is then added to a list, as sketched below. So they can be told apart later, the dishes get the key “dish” and the prices the key “price”.
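In code, the structure could look roughly like this (the variable names dish and price are placeholders):

// One HashMap per dish; dishList later feeds the SimpleAdapter
HashMap<String, String> entry = new HashMap<String, String>();
entry.put("dish", dish);
entry.put("price", price);
dishList.add(entry);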

Once the data extraction is done, onPostExecute is called automatically. The date is set on a TextView above the ListView, and a SimpleAdapter populates the layout with the list of HashMaps:

dateTextView.setText(date);
simpleAdapter = new SimpleAdapter(MainActivity.this,
        dishList, R.layout.list,
        new String[] { "dish", "price" },
        new int[] { R.id.text1, R.id.text2 });
setListAdapter(simpleAdapter);

Since it’s called Minimal Mensa Plan, there are no other features (such as caching or selecting a canteen). The app in the Play Store scrapes the canteen at the campus Berliner Tor, but it might as well be used for others, just by changing the URL. It’s released under the MIT License and available on GitHub.


How to disable ACK with xbee-arduino API

Posted on 23 April 2014

It might be useful to disable all the ACK packets (0x8B) for testing purposes. That’s possible with all requests that inherit from XBeeRequest; my example shows a ZBTxRequest.

txRequest.setFrameId(NO_RESPONSE_FRAME_ID);
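In context, that could look roughly like this minimal sketch (the 64-bit destination address and payload are placeholders):

#include <XBee.h>

XBee xbee = XBee();
uint8_t payload[] = { 42 };
XBeeAddress64 addr64 = XBeeAddress64(0x0013A200, 0x40000000);
ZBTxRequest txRequest = ZBTxRequest(addr64, payload, sizeof(payload));

void setup() {
  Serial.begin(9600);
  xbee.setSerial(Serial);
}

void loop() {
  // Frame ID 0 (NO_RESPONSE_FRAME_ID) suppresses the TX status response
  txRequest.setFrameId(NO_RESPONSE_FRAME_ID);
  xbee.send(txRequest);
  delay(1000);
}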


Markdown Templates for README files on GitHub

Posted on 7 March 2014

Every time I upload a new project to GitHub, I find myself writing an almost identically structured README.md file in Markdown. So I thought I could make some templates for next time. Right now I only have one for mobile apps and one for websites with GitHub Pages, but there’s more to come.

I’ve also found that quite a lot of projects have very unstructured and ugly READMEs. It’s always nice to get a quick overview of a project, especially when it doesn’t have a website. So maybe someone can use this.
