Entries tagged xml

Using nanoc for podcast feeds

Posted on 23 July 2015

Nanoc is a static site generator much like Jekyll or Octopress, but with a more minimalistic approach. These generators are not necessarily the most suitable choice for a podcast website, but it’s possible, and you might save on web space and traffic if you host on GitHub or Neocities.

Creating the podcast feed is basically like writing a normal Atom feed for the blog, since podcast feeds ARE indeed feeds, just with an enclosure tag that holds the URL of the audio or video file. This guide does not cover the iTunes-specific tags; I might add them one day. Follow the instructions and the documentation for the Blogging helper, and tag your podcast episodes as kind:article.
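For illustration, the enclosure in an Atom feed is just a link element with rel="enclosure" inside an entry; the filename and length here are made up:

<entry>
  ...
  <link rel="enclosure" type="audio/mpeg" length="12345678"
        href="http://yourpodcast.com/mp3/001-podcast-episode-title.mp3"/>
</entry>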

The media files are placed in content/mp3 and content/opus, which is where the links in the feeds will point to later.

I invented the fields mp3 and opus, since these are the file formats I want to use; the values are the filenames. The header of a new episode/post would look like this:

---
title: 001 - Podcast Episode Title
created_at: 2015-03-14 09:00:00 +0000
kind: article
tags: [podcast,topic]
mp3: 001-podcast-episode-title.mp3
opus: 001-podcast-episode-title.opus
---

This has to be filled in manually every time, so make sure you have the exact filename, as some podcast clients won’t allow correcting the URL later.

The next step is to write a separate feed for each format. For that I’m using a new parameter, format, which will be interpreted by the helper class later on. For example, I called my normal feed blogfeed and the podcast feeds mp3feed and opusfeed. Create the file blogfeed.erb in the content folder and fill it with the following:

<%= atom_feed :title => 'repats podcast blog', :author_name => 'repat',
:author_uri => 'http://repat.de', :limit => 10, :format => 'blog' %>

The mp3feed.erb and opusfeed.erb are filled accordingly:

<%= atom_feed :title => 'repats podcast mp3', :author_name => 'repat',
:author_uri => 'http://repat.de', :limit => 10, :format => 'mp3' %>

<%= atom_feed :title => 'repats podcast opus', :author_name => 'repat',
:author_uri => 'http://repat.de', :limit => 10, :format => 'opus' %>

The next step is to use the Blogging helper locally in your nanoc installation. To do that, you need to copy it from the gems folder into your lib folder. For me, that was:

$ cp /var/lib/gems/1.9.1/gems/nanoc-3.7.5/lib/nanoc/helpers/blogging.rb lib/

It is then included like this in lib/default.rb:

include Nanoc3::Helpers::Blogging

Add the following attribute to the AtomFeedBuilder class

attr_accessor :format

If you don’t trust yourself to always remember the files, you might want to add this check to the validate_feed_item function:

if format.nil?
  raise Nanoc::Errors::GenericTrivial.new('Cannot build Atom feed: no format (mp3, opus, blog) in params, item or site config')
end

After the # Add link comment is a good place to insert the enclosure tag. File.size() will only work if the files are actually there under exactly those names. This code could probably be written a bit more defensively, but I’m not a Ruby developer, and since every post will have an mp3 file and an opus file, it’s not a problem this way.

# Add podcast enclosure
if format == 'mp3'
  xml.link(href: "http://yourpodcast.com/mp3/" + a[:mp3], length: File.size("content/mp3/" + a[:mp3]), type: "audio/mpeg", rel: "enclosure")
elsif format == 'opus'
  # opus in an Ogg container is served as audio/ogg rather than audio/mpeg
  xml.link(href: "http://yourpodcast.com/opus/" + a[:opus], length: File.size("content/opus/" + a[:opus]), type: "audio/ogg", rel: "enclosure")
end

To pass the format parameter from the feed files on to the builder, the last step is to add this line to the atom_feed function:

      builder.format            = params[:format]

You might need to install the builder gem for this to run:

$ sudo gem install builder

The only thing left to do is to edit the Rules file:

compile '/blogfeed' do
  filter :erb
end
compile '/mp3feed' do
  filter :erb
end
compile '/opusfeed' do
  filter :erb
end
[...]
route '/blogfeed' do
  '/blogfeed.xml'
end
route '/mp3feed' do
  '/mp3feed.xml'
end
route '/opusfeed' do
  '/opusfeed.xml'
end
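For a quick sanity check of the generated feeds, something like this should work (assuming nanoc’s default output folder and an installed xmllint):

$ nanoc compile
$ xmllint --noout output/mp3feed.xml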


You can find the blogging.rb and the Rules file on GitHub.

Script with cat, unzip, grep and awk for eBay’s XML responses

Posted on 7 January 2015

When you add e.g. a FixedPriceItem via the eBay API, you get the following response after the process has first been Scheduled and then InProcess:

Status is Completed
Downloading fixed price item responses...Done
File downloaded to /tmp/add-fixed-price-item-responses-ABC123.zip
Unzip this file to obtain the fixed price item responses.

Write the following code into a file (here called debugebayxml), move it to /usr/local/bin and make it executable with chmod +x.

#!/bin/sh
cat `unzip -o "$1" | grep inflating | awk '{print $2}'`

This script gives you a quick dump of the XML on the console for debugging:

debugebayxml /tmp/add-fixed-price-item-responses-ABC123.zip

Automate selling at LaRedoute #1: Get new orders

Posted on 16 December 2014

This blog post is part of the series Automate selling at LaRedoute.


The French marketplace LaRedoute unfortunately doesn’t have a real API, but they do have ways to automate some processes, and a lot of smaller marketplaces use the same concept. You get credentials for an SFTP server. On this server you will find the folders ToSupplier and FromSupplier, where the "supplier" (aka you) can upload and download a range of files documented by Merchantry on their blog. Processing of the uploaded files can take up to 6 hours, but is sometimes done in only a couple of minutes, so I’m going to assume the worst case of 6 hours in this post.

While programming a couple of scripts, I ran into the following problems:

  • the server is incredibly slow at times (better at night), so connections sometimes just time out
  • sometimes listing the ToSupplier folder times out because there are too many files in it (according to support… huh?), so they have to be deleted regularly
  • not only the connection to LaRedoute but also the connection to my local MySQL server times out
  • I have to reserve a purchased item immediately after accepting it on LaRedoute, because it could be sold elsewhere during the up to 6 hours LaRedoute might take to hand over the shipping address

New orders can be found in the ToSupplier folder as tab-separated CSV files (with a .txt extension, though) named OrdersYYYY-MM-DD-hh-mm-ss.txt.

Since PHP is the company’s main language, I will show a couple of scripts that automate downloading and processing those files. The code is of course simplified for better understanding. We’re using SFTP instead of FTP, and I found phpseclib to be the most usable library for that.

I will propose the use of two tables in the MySQL database: TEMP-FILENAMES and FILENAMES-HISTORY. Both have a single unique column, filename. FILENAMES-HISTORY will contain the name of every file ever processed by the following script; TEMP-FILENAMES is a helper table that is truncated after every run.
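For reference, a minimal sketch of the two tables (the VARCHAR length is my assumption; all that matters is the unique filename column):

CREATE TABLE `FILENAMES-HISTORY` (
  `filename` VARCHAR(255) NOT NULL,
  UNIQUE KEY (`filename`)
);
CREATE TABLE `TEMP-FILENAMES` LIKE `FILENAMES-HISTORY`;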

First, we need to establish a connection:


// Net_SFTP comes from phpseclib: include('Net/SFTP.php');
$sftp = new Net_SFTP(SFTP_LAREDOUTE_HOST);
if (!$sftp->login(SFTP_LAREDOUTE_USER, SFTP_LAREDOUTE_PASS)) {
    exit('Login Failed');
}

Then we change directory. Changing into a directory usually involves listing it, but since this is not a graphical client, the real timeout is likely to hit on the line below: $nlist will simply be null if the listing fails, and I assume it didn’t work if it took more than 30 seconds.

$sftp->chdir('/ToSupplier');

$beforetime = time();
$nlist = $sftp->nlist();
$aftertime = time();
if (($aftertime - $beforetime) > 30) {
    exit('Timeout while listing directory');
}

The next piece of code is only executed if the listing worked. Every filename that includes the word "Order" is now inserted into the temporary table:

foreach ($nlist as $filename) {
    if (strpos($filename, 'Order') !== false) {
        $qry = "INSERT INTO `TEMP-FILENAMES`(`filename`) VALUES ('" . $filename . "')";
        $insert = mysql_query($qry, MYSQLCONNECTION) or print mysql_error();
    }
}

You can now take the difference between the filenames in your FILENAMES-HISTORY table and the possibly new ones in the temporary table:

$tmpCmpFilenames = array();
$qry = "SELECT `filename` FROM `TEMP-FILENAMES` WHERE `filename` NOT IN (SELECT `filename` FROM `FILENAMES-HISTORY`)";
$select = mysql_query($qry, MYSQLCONNECTION) or print mysql_error();
while ($row = mysql_fetch_assoc($select)) {
    $tmpCmpFilenames[] = $row['filename'];
}

Now we have all the new files in the array $tmpCmpFilenames. The correct way would be to verify the downloaded files with hashes. Instead, we decided to misuse the file size, since it’s a good indicator that something didn’t work properly ;) Files that weren’t downloaded correctly are removed from the array; they will show up again the next time the script runs.

foreach ($tmpCmpFilenames as $key => $filename) {
    $remotefilesize = $sftp->size($filename);
    $sftp->get($filename, 'OrdersFromLaRedoute/' . $filename);
    $localfilesize = filesize('OrdersFromLaRedoute/' . $filename);
    if ($remotefilesize != $localfilesize) {
        // drop incomplete downloads; they are retried on the next run
        unset($tmpCmpFilenames[$key]);
    }
}

We can now insert the processed filenames into the FILENAMES-HISTORY table:

foreach ($tmpCmpFilenames as $filename) {
    $qry = "INSERT INTO `FILENAMES-HISTORY`(`filename`) VALUES ('" . $filename . "')";
    $insert = mysql_query($qry, MYSQLCONNECTION) or print mysql_error();
}

Last but not least, the temporary table needs to be truncated for the next run.

$truncate = mysql_query("TRUNCATE TABLE `TEMP-FILENAMES`", MYSQLCONNECTION) or print mysql_error();

The next step is described in part 2 of this series.

Guide: Listing products on eBay via the API with the PHP SDK – Part 2: Creating the XML files with PHP

Posted on 10 October 2014

This blog post is part of the series on listing products on eBay.


2.1. XMLWriter

For writing XML in PHP, the XMLWriter class is used here; it should be available pretty much everywhere. First, an object is created:

$writer = new XMLWriter();

With the following code you can switch between output in the browser/on the console and writing to an .xml file:

if ($DEBUG) {
    $writer->openURI('php://output');
} else {
    $filename = 'AddFixedPriceItem.xml';
    touch($filename);
    $writer->openURI($filename);
}

XMLWriter doesn’t insert any line breaks by default, so the following statements would write everything into a single line. In principle that isn’t necessarily a problem, but eBay will no longer accept the file beyond a certain line length, (probably) because the number of lines is a criterion for the maximum size of BulkDataExchangeRequests.

$writer->setIndent(true);

Before writing any elements, the document is started with its version and encoding:

$writer->startDocument('1.0', 'UTF-8');

Result:

<?xml version="1.0" encoding="UTF-8"?>

From here on, elements can be written as needed. Parent elements are opened and closed with the following calls. When closing, the name doesn’t matter; only the order counts, so pay close attention inside loops.

$writer->startElement('BulkDataExchangeRequests');
...
$writer->endElement();

Result:

<BulkDataExchangeRequests>
...
</BulkDataExchangeRequests>

To write an element with a value, the following call is used:

$writer->writeElement('SiteID', '77');

Result:

<SiteID>77</SiteID>

There is also the rare case of a tag that carries an attribute. This is done as follows:

$writer->startElement('ShippingServiceCost');
$writer->writeAttribute('currency', 'EUR');
$writer->text('0.0');
$writer->endElement();

Result:

<ShippingServiceCost currency="EUR">0.0</ShippingServiceCost>

Finally, the document should be closed and the buffer flushed (either to the output or to the file):

$writer->endDocument();
$writer->flush();
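Putting the calls from above together, a minimal end-to-end sketch (with only the one element from the examples, no real item data) looks like this:

$writer = new XMLWriter();
$writer->openURI('AddFixedPriceItem.xml'); // or 'php://output' for debugging
$writer->setIndent(true);
$writer->startDocument('1.0', 'UTF-8');
$writer->startElement('BulkDataExchangeRequests');
$writer->writeElement('SiteID', '77');
$writer->endElement(); // closes BulkDataExchangeRequests
$writer->endDocument();
$writer->flush();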

2.2. Zipping the files

In the examples used in Part 3, the .xml file is compressed before it is uploaded. This can be done with the following snippet:

if (!$DEBUG) {
    $gzfile = $filename . ".gz";
    $fp = gzopen($gzfile, 'w9'); // w9 = write with maximum compression
    gzwrite($fp, file_get_contents($filename));
    gzclose($fp);
}
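To double-check the result before uploading, you can peek into the compressed file on the console:

$ zcat AddFixedPriceItem.xml.gz | head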

OpenCV examples with Ubuntu 12.04

Posted on 6 November 2013

To get a first look at OpenCV, I followed this guide and started out with my built-in webcam (/dev/video0):

sudo apt-get install build-essential libavformat-dev ffmpeg libcv2.3 libcvaux2.3 libhighgui2.3 python-opencv opencv-doc libcv-dev libcvaux-dev libhighgui-dev

As an example program I picked face detection. With the help of this guide I found the commands to build the examples and test the output:

cp -r /usr/share/doc/opencv-doc/examples .
cd examples/c
gunzip facedetect.cpp.gz
sh build_all.sh

To start it you also need the XML file /usr/share/opencv/haarcascades/haarcascade_frontalface_alt.xml, so the command is:

$ ./facedetect --cascade="/usr/share/opencv/haarcascades/haarcascade_frontalface_alt.xml"

This opens the device /dev/video0. You can also pass an image as an additional argument:

$ ./facedetect --cascade="/usr/share/opencv/haarcascades/haarcascade_frontalface_alt.xml" lena.jpg