Tips'n'tools

[Screenshot: the PWA on macOS showing some results (web site hyperlinks of publications and blogs) for the keyword 'iOS']
[GIF: the CLI tool updating assets, computing metrics and providing data]

Save time and be faster with your own cache of references, tools and specifications useful for developers!

Tips'n'tools is a project which aims to make searches a bit faster and to improve technology watch.
Indeed, sometimes you need to share useful web links and cool libraries with your colleagues, but which platform should you use? Your company's internal social network? Time lost if you leave your job. A public social network? Time lost if that service shuts down. Your web browser's bookmarks? If your computer gets stolen or reset, you are done.

Tips'n'tools allows you to fill a spreadsheet (.ods file) and then export its sheets to CSV files (ok it's old school). Then you can:

  • use the main Shell script (tipsntools.sh) to make queries and find the things you have listed (hey, just CLI, no GUI)
  • build a little Progressive Web App (a Single Page Application) which lets you make queries with a more user-friendly UI (but it can be improved a lot)
  • update a global web page if you cannot use the web app (quite heavy if you have plenty of content to show)
  • run the server-side feed script, written in Ruby, so as to expose an HTTP API and make queries from everywhere (a Ruby servlet calling Shell scripts, quite dumb)
  • place this whole project on your web server, and make the Ruby script or the web app available to your friends or colleagues (but some web-voodoo-glue might be required)

Tips'n'tools may be useful if you want to compile, in one place, plenty of references and data relevant to your projects. Never rely on social networks or heavy corporate tools: make your own cache and bring it everywhere!

You can get more details in the wiki.

The doctor

A doctor script is available to check whether all prerequisites are fulfilled

bash doctor.sh

The main script

Get help about commands

bash tipsntools.sh --help

Make queries using regexp

bash tipsntools.sh {--findAll | --findWeb | --findDevices | --findTools | --findSocs} yourRegex {--json}

It searches the corpus files using a regular expression as a filter. Adding the --json flag after the regex makes the script produce JSON output.
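For instance, a query restricted to the tools corpus could look like this (the regex "Kotlin" is only an illustration):

bash tipsntools.sh --findTools "Kotlin" --json
# drop --json to get the default (non-JSON) output
bash tipsntools.sh --findTools "Kotlin"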

Update corpora

bash tipsntools.sh --update

It builds HTML and JSON files from your CSV files, and builds the global web page and the little web app (to be seen as a Progressive Web App or a Single Page Application if you like buzzwords).

Add new elements in spreadsheets and other files

Fill the .ods spreadsheet file with the new data you want to save, keeping the column order. Then export the spreadsheet tabs in CSV format (UTF-8 encoded) into the suitable folders, and run the command below to update the .html and .json files.

bash tipsntools.sh --update

Who's who

The main script (tipsntools.sh) calls the core Shell scripts (stored in the utils/core folder) to play with the CSV files (in the contents folder).
The server-side script (datafeed.rb) calls the main script to process the queries.
The web app is defined in the utils/webapp folder; if the web browser is not suitable, the web page (defined in the utils/webpage folder) can be used instead. The web app and the web page to use are in the build folder (their assets are copied from utils). Finally, the CSV files you export from the ODS file (preserving UTF-8) and the generated .json and .html files are in the contents folder.

Customize the project

In most cases nothing is hard-coded, I hope. Feel free to customize the Shell scripts, the HTML assets, etc. The thing is, if you want to add a column to one of the spreadsheet's sheets, do not forget to update the dedicated Shell scripts and the HTML elements (CSS style sheets, HTML tables, etc.).

Run the web app

The web app here is a kind of "Progressive Web App" built as a "Single Page Application" (one page, offline, installable, with a cache, responsive, etc.), BUT it remains web before all and it's a bit crappy. Because the web browsers' world is messy (and also because I enjoy native apps), there are still web browsers which do not support Service Workers, Web Workers, IndexedDB, Promises, ES6 or other nowadays common tools. Thus you should use an up-to-date web browser, and sometimes it still won't work. "The web is universal and cool", they said (U_U). Because Service Workers are used, you should reach the web app through HTTPS or through a local web server (localhost).
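For a quick local test, any static file server pointed at the build folder should do; for example, assuming Python 3 is installed (the port is arbitrary):

cd build
python3 -m http.server 8000
# then browse http://localhost:8000 so the Service Worker can register on localhost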

If you saw newbie things I did, feel free to submit a pull request!

Requirements

  • Operating system which can use Shell (Bash / ZSH) and Ruby scripts (macOS, GNU/Linux, ...)
  • Up-to-date web browser compatible with IndexedDB, Web Workers, Service Workers, ES6, JS' Promises... (Firefox 58.0.2+, Chromium 64.0.+, ...)
  • Something which can deal with .ods file (Libre Office, Open Office, ...)

For macOS users, you should install the truncate command:

brew install coreutils
brew install truncate
# You may also need to run `brew link truncate` afterwards

Deploy it

To deploy this project for you, your colleagues, your team or whoever, here are the steps (a command sketch follows the list):

  • get this project (download, fork, clone, summon, teleport ...)
  • customize elements if you want (columns, styles, ...)
  • fill the ODS file
  • export each sheet as CSV file
  • run the update command to produce the HTML and JSON files from the CSV files
  • and do not forget to store the project, i.e. the content of the "build" folder, on a server or a shared space (so as to reach the web app or web page, call the web API, etc.)
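Put together, a minimal deployment could look like the commands below (the target directory is only an example; adapt it to your web server):

# check that the prerequisites are fulfilled
bash doctor.sh
# regenerate the HTML and JSON files and the web assets from the CSV exports
bash tipsntools.sh --update
# publish the result, e.g. under /var/www
cp -r build/ /var/www/tipsntools/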

Files tree

Here is the file tree for this version:

  • build: the web page's and the web app's elements, updated on each run, to place on a server (e.g. in /var/www)
  • contents: the place where the CSV files you export from the ODS file are stored, along with the generated HTML and JSON files
  • utils: the place where the core Shell scripts are, along with the assets for the web page and the web app

Note

It seems some web browsers (Firefox 58 for Android and Ubuntu) have bugs with IndexedDB, so you won't be able to use the web app with them.
You ask why Shell and Ruby are used instead of fully cross-platform languages? Power, effectiveness, and free and open-source OS. This project was, and in fact still is, a side project without too much ambition and with naive implementations. Feel free to submit pull requests to improve it.

Must-read note

Have a look at the release notes to get more details.

Known issues

If you get errors like sed: RE error: illegal byte sequence, please refer to the hyperlink below. It seems some files (like the CSV files) you produce contain special characters ('�') which make sed fail. https://stackoverflow.com/questions/19242275/re-error-illegal-byte-sequence-on-mac-os-x
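A common workaround from that thread is to force a byte-wise locale before running the script, or to strip the invalid bytes from the offending CSV export (the file name below is only an example):

# run the update with a C locale so sed does not choke on invalid byte sequences
LC_ALL=C bash tipsntools.sh --update
# or clean a CSV export by dropping the invalid bytes
iconv -f UTF-8 -t UTF-8 -c contents/web.csv > contents/web_clean.csv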

You may also get, in the web app, an error like "An error occured with the JSON data gotten from the feed Web API. The degraded mode is still available". By looking in the developer console, you may find there is an unexpected error in the JSON data returned by the script. In this case you should have a look at the CSV files you exported; maybe they are not well formatted and they break the produced JSON.
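To narrow the problem down, you can validate the generated JSON with any JSON parser before blaming the web app; for instance with Ruby, which is already required (the file name is only an example, pick one of the generated files in contents):

ruby -r json -e 'JSON.parse(File.read(ARGV[0])); puts "JSON looks fine"' contents/tools.json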

In addition, if you install the web app on your computer, you may have different behaviors if you choose Brave, Chrome or Firefox :-/

More details on the issues tracker.

Some features may fail, like the --check feature which checks whether URLs are still available or not. Some commands like curl may fail if a website does not respond. In this case you may have to comment out the set -euxo pipefail line in the main script so the script keeps checking URLs even if curl fails.
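For example, the strict-mode line at the top of tipsntools.sh can be temporarily commented out (remember to restore it once the --check run is done):

# temporarily disabled so a failing curl does not abort the whole --check run
# set -euxo pipefail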

Funny notice

Several years ago, I noticed the Git history was crappy and messed up. There were some data leaks, the wrong email address was used (and got a lot of spam), commits were not linked to the GitHub account because of a bad pseudo... No DCO, no GPG signing... It was a big mess. Attempts to clean the history failed, which is why the project was deleted and created again, and all the Git history before 2020 was lost. Lesson learned!

Licenses

All the sources (Shell, JavaScript, CSS, HTML, Ruby, etc.) are under MIT license. The ODS template and any HTML, CSS and JSON generated files are under Creative Commons With Attribution.

Support

If you want to support this project, feel free to submit issues and pull requests!

You can also feed the developer by using some of the tools below or other mediums 🫶

[Buy Me a Coffee]      [PayPal]      [Ğ1]