@node Usage
@unnumbered Usage

How @strong{I} use it:

@table @asis

@item Get its source code
@example
$ git clone git://git.stargrave.org/feeder.git
$ cd feeder
@end example

@item Compile @command{feed2mdir} utility
@example
$ ( cd cmd/feed2mdir ; go build )
@end example

@item Create feeds state directories
You can create feed subdirectories under @file{feeds/} manually:

@example
$ mkdir -p feeds/my_first_feed/@{cur,new,tmp@}
$ echo http://example.com/feed.atom > feeds/my_first_feed/url
@end example

or convert a Newsboat @file{urls} file (containing many lines with URLs)
to a subdirectory hierarchy with @command{urls2feeds.zsh}:

@example
$ ./urls2feeds.zsh < ~/.newsboat/urls
$ cat feeds/blog.stargrave.org_russian_feed.atom/url
http://blog.stargrave.org/russian/feed.atom
@end example

@command{urls2feeds.zsh} won't touch already existing directories and
will warn if some of them have disappeared from @file{urls}.

@item Download your feed(s) data
@example
$ cmd/download.sh feeds/blog.stargrave.org_russian_feed.atom
$ ./feeds-download.zsh # to invoke parallel downloading of everything
@end example

You probably want to change its default @env{$PROXY} value. It uses
@command{curl}, which is aware of @code{If-Modified-Since} and
@code{ETag} headers, compressed content encodings and HTTP redirections.
If you want to see verbose output, then set @env{FEEDER_CURL_VERBOSE=1}.

@item Parse your feeds
@example
$ cmd/parse.sh feeds/blog.stargrave.org_russian_feed.atom
$ ./feeds-parse.zsh # to parse all feeds in parallel
@end example

@item Quick overview of the news
@example
$ ./feeds-news.zsh
habr.com_ru_rss_interesting: 7
habr.com_ru_rss_news: 3
lobste.rs_rss: 3
naked-science.ru_?yandex_feed=news: 1
planet.fsfe.org_atom.xml: 1
www.astronews.ru_astronews.xml: 1
www.darkside.ru_news_rss: 5
@end example

@item Run Mutt
@example
$ ./feeds-browse.zsh
@end example

That will read all feeds' titles and create a sourceable @file{mutt.rc}
configuration file with predefined helpers and @code{mailboxes}
commands. Mutt will be started in mailbox browser mode (I will skip many
entries):

@verbatim
  1 N [  1|101] 2021-02-17 20:41 Cryptology ePrint Archive/
  3   [  0|  8] 2021-12-02 19:28 Thoughts/
 32   [  0|  8] 2021-02-17 19:32 apenwarr/
101   [ 10| 50] 2021-02-14 13:40 Блог Stargrave на русском comments/
102   [  0| 51] 2021-02-17 19:37 Блог Stargrave на русском/
316   [  0| 44] 2021-02-17 19:33 Eaten By A Grue: Infocom, Text Adventures, and Interactive Fiction/
@end verbatim

ePrint has new entries since the last downloading/parsing. Stargrave's
blog comments have nothing new, but still hold ten unread entries. If we
open the "Eaten By A Grue" mailbox, then we will see its entries:

@verbatim
   1   [2021-01-30 11:00] Zork Zero: The Revenge of Megaboz (0,8K)
   2   [2021-06-12 11:01] Journey: The Quest Begins (0,8K)
   3   [2021-04-28 11:00] Eaten By A Cruise (0,8K)
[...]
---Mutt: feeds/monsterfeet.com_grue.rss [Nachr:44 60K]---
@end verbatim

@item Press @code{q} to return to the mailbox browser again
This is made for convenience, because you will often switch your
mailboxes (feeds), but @code{q} quits Mutt by default.

@item Press @code{A} to mark all messages read
And again this is made for convenience. It will mark both new
(@strong{N}) and old-but-unread (@strong{O}) messages as read. You will
see tag marks to the left of each message, showing what was touched.

@item Press @code{o} to open links and enclosure URLs
Do it in pager mode and your message will be piped to
@command{cmd/x-urlview.sh}, which will show all @code{X-URL} and
@code{X-Enclosure} links.
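
Under the hood that is just a pager-mode macro piping the message to the
helper. A minimal sketch of such a binding; the actual macro in the
generated @file{mutt.rc} may be defined differently:

@example
# pipe the displayed message to the URL extractor (sketch only)
macro pager o "<pipe-message>cmd/x-urlview.sh<enter>" "show message's URLs"
@end example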
@item Index your messages
@example
$ ./feeds-index.sh
@end example

That will create @file{mu/} and @file{search/} directories and run
@command{mu index} indexing, which can safely be done incrementally
after each download/parse cycle.

@item Search something
Press @code{<F8>} in Mutt's index and enter your mu/Xapian search query.
Let's search for articles mentioning
@url{https://en.wikipedia.org/wiki/Planetfall, Planetfall} during the
2019-2021 period: @code{Planetfall date:2019..2021}. @command{mu} will
create symbolic links to the matching messages in the @file{search/}
subdirectory. Press @code{<F9>} to switch to that mailbox:

@verbatim
   1   [2021-12-20 07:08] Missed Classic: Stationfall - When Food Dispensers Attack (The Adventurers Guild)
   2   [2021-11-20 04:52] Missed Classic 102: Stationfall - Introduction (1987) (The Adventurers Guild)
   3   [2021-11-19 17:54] Boffo Games (The Digital Antiquarian)
   4   [2021-10-30 23:05] Missed Classic 100: The Manhole (1988) (The Adventurers Guild)
   5   [2020-05-17 22:16] Round 04 Reveal (unWinnable State)
   6   [2020-05-16 22:29] Round 03 Reveal (unWinnable State)
   7   [2020-04-20 11:00] Planetfall (Eaten By A Grue: Infocom, Text Adventures, and Interactive Fiction)
   8   [2020-04-09 11:00] Beyond Zork (Eaten By A Grue: Infocom, Text Adventures, and Interactive Fiction)
-%-Mutt: =search [Nachr:8 215K]---
@end verbatim

Pay attention that the index format differs here: it lacks the
unnecessary message flags display and adds the name of the feed in
parentheses.

@item Clean up the excess number of messages
@example
$ ./feeds-clear.zsh
@end example

That will remove all messages in each feed's @file{cur/} directory
except the first hundred, ordered by @code{mtime}. Pay attention that
the @file{new/} directory is not touched, so you won't lose completely
new and unread messages when you are on vacation and have left
@command{cron}-ed workers running (a wrapper suitable for such workers
is sketched after this list). The @command{cmd/feed2mdir/feed2mdir}
command has the @option{-max-entries 100} option set by default.

@item If you want to clean the download state
@example
$ cmd/download-clean.sh feeds/FEED
@end example

@anchor{Enclosures}
@item Download enclosures
Many feeds include links to so-called enclosures, like audio files for
podcasts. While your mail is not yet processed by the MUA, its messages
are still in @file{new/}, and you can run the enclosure downloading
process, which uses @url{https://www.gnu.org/software/wget/, GNU Wget}.
Specify the directory where your enclosures should be placed. Each
enclosure's filename is more or less filesystem-friendly, with the
current timestamp in it.

@example
$ mkdir path/to/enclosures
$ ./feeds-encs.zsh path/to/enclosures
[...]
traffic.libsyn.com_monsterfeet_grue_018.mp3-20220218-152822
[...]
$ file path/to/enclosures/traffic.libsyn.com_monsterfeet_grue_018.mp3-20220218-152822
path/to/...: Audio file with ID3 version 2.2.0, contains:MPEG ADTS, layer III, v1, 96 kbps, 44.1 kHz, Monaural
@end example

@command{feeds-encs.zsh} does not parallelize jobs, because enclosures
are often heavy enough to saturate your Internet link.

@end table
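
The download, parse, index and cleanup steps above chain naturally for
unattended runs. A minimal sketch of such a @command{cron}-able wrapper;
the checkout path is an assumption, adjust it to your setup:

@example
#!/bin/sh -e
# Hypothetical wrapper: fetch, parse, index and trim all feeds.
cd /path/to/feeder     # assumed location of the cloned repository
./feeds-download.zsh   # parallel download of every feed
./feeds-parse.zsh      # convert downloaded feeds to Maildirs
./feeds-index.sh       # incrementally update the mu index
./feeds-clear.zsh      # trim each feed's cur/ to the newest hundred
@end example

Invoke it from your @command{crontab} at whatever period suits you; the
@file{new/} directories preserve unread entries between runs.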