KCHUNG Archive



Technical Info




everything that has ever been broadcast on KCHUNG (regular programming from the studio, special events, remote broadcasts, news programming, etc.) is archived and made accessible for searching and download at

archived mp3 files can be browsed by date or by show, and can be searched by keyword, artist / guest names, episode titles, location / venue, etc.

all of this metadata is stored in a database hosted by Los Angeles Contemporary Archive, a partner organization of KCHUNG.

editing database information for shows

any kchung community member can edit information about KCHUNG shows by clicking the "edit" link under any file in the archive, or by logging in to the LACA site with the user name "kchung" and the password "Kchungarchive99".

The information that can be added or edited includes:

  • program title (applies to the single episode only, and doesn't affect either the url or the show (series) info)
  • artist / authors (this can be the show creator as well as any guests. some names exist in the database already, so check the drop-down menu before adding a new name)
  • description (applies to the single episode being edited)
  • keywords (search terms may already exist, use the auto-complete function to find common terms)
  • show (this is the series that the individual episode / file belongs to. use the auto-complete to find the standardized spelling of the show's title to avoid multiple entries for the same series)
  • date aired / exhibited (automatically generated from the filename; changing this affects where the file shows up when sorting by date)
  • venue (kchung by default, change for remote broadcasts)
  • file (this can be any image or text document that you want to store with the audio)

once you've logged in, you can also create or edit a description for any show / series as a whole, by clicking the "add show details" link that appears at the bottom of the search results returned when browsing by show in the kchung archive.

any information about guests, playlists, descriptions, keywords, etc. that you add helps make the database more searchable / more useful... go wild.

troubleshooting archive issues

for any issues with the archive - including missing or mis-labeled shows, broken links, duplicate uploads, merging multiple search terms / show names / artist names into one term, or just general help with using the archive - email [email protected]. suggestions on how to make the archive work better are also very much appreciated!

there is also the opportunity to get involved as an archive "power user" - digging a little deeper into the structure of the archive, bulk-editing, and helping things work better. if you are interested, send an email to [email protected] or see the archive troubleshooting page.

how the archive gets synchronized / the archive-sync script

  • local storage of archive files:

once they've been converted and named by station managers, archive files are hosted locally as mp3s in the ~/desktop/archive folder, sorted into folders by date. this folder gets backed up to an external hard drive irregularly, and very rarely does anything in it need to be deleted. hi-res versions of files are stored in the ~/desktop/wav+aiff file archive folder - this folder needs to be dumped to an external hard drive more frequently, and older files deleted.

  • remote storage of archive files:

the archive files (as mp3s) are hosted remotely in a dreamhost "dreamobjects" account - functionally equivalent to an amazon s3 cloud storage "bucket". the structure of an s3 bucket is special: each bucket has a username and two keys for authentication, and every file and directory is stored as an "object". the bucket can be browsed using dreamhost's cpanel or an application like cyberduck - cyberduck is set up on the kchung studio computer to automatically log in with the username and security keys. cyberduck displays the bucket as if it were a typical directory structure (files inside of directories), but it's worth remembering that each object is essentially horizontally related to the others. the bucket has a cname associated with it that creates a url for each file in the format: . this is how one would access / download each file directly.
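to make the cname-to-url mapping concrete, here is a minimal sketch of how an object key in the bucket becomes a public download url. the cname and filename below are made-up placeholders - the real kchung cname and url format are not shown on this page:

```python
from urllib.parse import quote

# Sketch: map a DreamObjects (S3-compatible) object key to a public
# download URL via the bucket's CNAME. The CNAME and key here are
# hypothetical examples, not the real KCHUNG values.
def object_url(cname: str, key: str) -> str:
    """Build the public URL for an object, percent-encoding the key
    (slashes are kept, since they act as path separators)."""
    return "http://{}/{}".format(cname, quote(key))

print(object_url("archive.example.org", "2019/09/2019-09-17_some show.mp3"))
# prints http://archive.example.org/2019/09/2019-09-17_some%20show.mp3
```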

  • the kchung archive database at LACA:

the database information, displayed on the kchung website in an iframe, comes from LACA's drupal installation, which manages a mysql database stored on their server. kchung has two users in LACA's drupal that can edit this database: "kchung" (basic user) and "kchung admin" (advanced user). the database is automatically populated every time the archive-sync script runs (see below). drupal uses a regex parser to determine the date and show title from the filename (this is why it needs to be in the format "").
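as an illustration of the kind of regex parsing described above: the actual filename format is not shown on this page, so the sketch below assumes a made-up "YYYY-MM-DD show title.mp3" pattern purely as an example of how a date and show title can be pulled from a filename:

```python
import re

# Hypothetical filename pattern - the real KCHUNG format is not documented
# here, so "YYYY-MM-DD show title.mp3" is assumed only for illustration.
FILENAME_RE = re.compile(r"^(\d{4}-\d{2}-\d{2}) (.+)\.mp3$")

def parse_filename(name):
    """Extract date and show title from a filename, or return None
    when the name doesn't match the expected format."""
    m = FILENAME_RE.match(name)
    if not m:
        return None  # a non-conforming file would be skipped / flagged
    date, show = m.groups()
    return {"date": date, "show": show}

print(parse_filename("2019-09-17 example show.mp3"))
# prints {'date': '2019-09-17', 'show': 'example show'}
```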

  • the archive-sync script:

on the kchung studio computer, a calendar alert is set to run the archive-sync script automatically in the background every day at 4 am. "archive-sync" is an automator script - a workflow that includes shell scripts and python scripts. all components, in addition to the automator script itself, live in the ~/scripts directory. note that the archive script does not need to be run manually - the entire process runs on its own once a day.

  • how the script works:

you can open the archive-sync script in automator and look at each step:

  1. an rsync-like python utility called "boto-rsync" does a dry run, looking for files in the ~/desktop/archive folder that are not already in the dreamobjects bucket. this produces a text document: ~/log/archive-temp-boto.txt.
  2. a python script parses that text file to make a new file, ~/log/archive.csv, which is just a list of filenames for each mp3 (no directories, no .ds_store files) in the correct format for drupal to auto-populate the database with show title, date, and url for each file.
  3. the newly created archive.csv is then uploaded to .
  4. LACA's auto-populate script is called using curl. at this point, LACA's drupal parses the csv to create database entries.
  5. after waiting a minute, the remote file is deleted to make sure there are no duplicate entries.
  6. meanwhile, boto-rsync is run again, this time in active mode, syncing new local files with the dreamobjects bucket.
  7. finally, the script queries the LACA database for kchung files, to fill the cache and (hopefully) speed up pageloads.
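the filtering step in the middle of this pipeline can be sketched as follows. the input lines are invented for illustration - real boto-rsync dry-run output may be formatted differently - but the filtering logic (keep mp3 basenames only, drop directories and dot-files like .ds_store) matches the description above:

```python
import os

# Sketch of the csv-building filter: reduce a listing of paths to bare
# mp3 filenames, dropping directory entries and dot-files (.DS_Store).
# The sample listing below is hypothetical.
def mp3_filenames(lines):
    out = []
    for line in lines:
        name = os.path.basename(line.strip())
        if name.lower().endswith(".mp3") and not name.startswith("."):
            out.append(name)
    return out

listing = [
    "archive/2019-09-17/",
    "archive/2019-09-17/.DS_Store",
    "archive/2019-09-17/2019-09-17 example show.mp3",
]
print(mp3_filenames(listing))
# prints ['2019-09-17 example show.mp3']
```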

  1. advanced troubleshooting
Page last modified on September 17, 2019
