CopyQ is a free and open-source clipboard manager that runs on multiple platforms, including Linux, Windows, and Mac OS X. It is well suited to copying and pasting large amounts of data, and offers powerful advanced features such as editing and scripting. It can store multiple text formats as well as images, supports tagging items, and allows drag-and-drop. For example, a user can quickly copy every cell of a spreadsheet, and on paste the contents advance automatically from cell to cell. It also offers many other conveniences, such as quickly opening folders (individually or in batches), opening network addresses, and light personal data management.
rqlite is simple to deploy and straightforward to operate and access,
and its clustering capabilities provide you with fault tolerance and
high availability. rqlite is available for Linux, macOS, and Microsoft Windows, and can be built for many target CPUs, including x86, AMD, MIPS, RISC, PowerPC, and ARM.
rqlite gives you the functionality of a rock solid, fault-tolerant, replicated relational database, but with very easy installation, deployment, and operation. With it you've got a lightweight and reliable distributed relational data store. Think etcd or Consul, but with relational data modelling also available.
You could use rqlite as part of a larger system, as a
central store for some critical relational data, without having to run
larger, more complex distributed databases.
Finally, if you're interested in understanding how distributed systems actually work, rqlite is a good example to study. Much thought has gone into its design and implementation, with clear separation between the various components, including storage, distributed consensus, and API.
How?
rqlite uses Raft
to achieve consensus across all the instances of the SQLite databases,
ensuring that every change made to the system is made to a quorum of
SQLite databases, or none at all. You can learn more about the design here.
Key features
Trivially easy to deploy, with no need to separately install SQLite.
The quickest way to get running is to download a pre-built release binary, available on the GitHub releases page. Once installed, you can start a single rqlite node like so:
rqlited -node-id 1 ~/node.1
This single node automatically becomes the leader. You can pass -h to rqlited to list all configuration options.
Homebrew
brew install rqlite
Forming a cluster
While not strictly necessary to run rqlite, running
multiple nodes means you'll have a fault-tolerant cluster. Start two
more nodes, allowing the cluster to tolerate the failure of a single
node, like so:
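For example (a hedged sketch: the port numbers and data directories are arbitrary choices for a local demo; the -http-addr, -raft-addr, and -join flags are as described in the rqlite documentation):

```shell
# Node 2 and node 3 join the cluster through node 1's HTTP address.
rqlited -node-id 2 -http-addr localhost:4003 -raft-addr localhost:4004 \
        -join http://localhost:4001 ~/node.2
rqlited -node-id 3 -http-addr localhost:4005 -raft-addr localhost:4006 \
        -join http://localhost:4001 ~/node.3
```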
This demonstration shows all 3 nodes running on the
same host. In reality you probably wouldn't do this, and then you
wouldn't need to select different -http-addr and -raft-addr ports for
each rqlite node.
With just these few steps you've now got a fault-tolerant,
distributed relational database. For full details on creating and
managing real clusters, including running read-only nodes, check out this documentation.
Inserting records
Let's insert some records via the rqlite CLI,
using standard SQLite commands. Once inserted, these records will be
replicated across the cluster, in a durable and fault-tolerant manner.
$ rqlite
127.0.0.1:4001> CREATE TABLE foo (id INTEGER NOT NULL PRIMARY KEY, name TEXT)
0 row affected (0.000668 sec)
127.0.0.1:4001> .schema
+-----------------------------------------------------------------------------+
| sql |
+-----------------------------------------------------------------------------+
| CREATE TABLE foo (id INTEGER NOT NULL PRIMARY KEY, name TEXT) |
+-----------------------------------------------------------------------------+
127.0.0.1:4001> INSERT INTO foo(name) VALUES("fiona")
1 row affected (0.000080 sec)
127.0.0.1:4001> SELECT * FROM foo
+----+-------+
| id | name |
+----+-------+
| 1 | fiona |
+----+-------+
Limitations
In-memory databases are currently limited to 2GiB
(2147483648 bytes) in size. You can learn more about possible ways to
get around this limit in the documentation.
Because rqlite performs statement-based replication, certain non-deterministic functions, e.g. RANDOM(),
are rewritten by rqlite before being passed to the Raft system and
SQLite. To learn more about rqlite's support for non-deterministic
functions, check out the documentation.
This has not been extensively tested, but you can directly
read the SQLite file under any node at any time, assuming you run in
"on-disk" mode. However, there is no guarantee that the SQLite file
reflects all the changes that have taken place on the cluster, unless you
are sure the host node itself has received and applied all changes.
In case it isn't obvious, rqlite does not replicate any
changes made directly to any underlying SQLite file when run in
"on-disk" mode. If you change the SQLite file directly, you may cause rqlite to fail. Only modify the database via the HTTP API.
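As an illustration (hedged: the endpoint and JSON shape follow the rqlite API documentation; the host and port assume the single-node demo above):

```shell
# Modify data through the HTTP API, never by editing the SQLite file.
# Parameterized statements are sent as a JSON array of statements.
curl -XPOST 'localhost:4001/db/execute?pretty' \
     -H 'Content-Type: application/json' \
     -d '[["INSERT INTO foo(name) VALUES(?)", "declan"]]'
```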
SQLite dot-commands such as .schema or .tables are not directly supported by the API, but the rqlite CLI supports some very similar functionality. This is because those commands are features of the sqlite3 command, not SQLite itself.
from https://github.com/rqlite/rqlite
-------------------------------------------
Build:
go install github.com/rqlite/rqlite/cmd/rqlited@latest
The Facebook CTF is a platform to host Jeopardy and “King of the Hill” style Capture the Flag competitions.
How do I use FBCTF?
Organize a competition. This can be done with as few as two
participants, all the way up to several hundred. The participants can be
physically present, active online, or a combination of the two.
Follow setup instructions below to spin up platform infrastructure.
The FBCTF platform was designed with flexibility in mind,
allowing for different types of installations depending on the needs of
the end user. The FBCTF platform can be installed either in Development
Mode, or Production Mode.
The Quick Setup Guide
details the quick setup mode which provides a streamlined and
consistent build of the platform but offers less flexibility when
compared to a custom installation. If you would prefer to perform a
custom installation, please see the Development Installation Guide or Production Installation Guide.
This guide is intended to help you get the platform up and running with as little effort as possible.
Please note that this guide is to be used with Ubuntu 16.04 LTS as
the host operating system. Other Linux distributions or operating
systems are not supported by the quick setup process.
Development Mode is for testing and agility; Production Mode offers better
performance and is typically used for live events. Production mode utilizes
an HHVM web cache, which speeds up processing.
You will need to select your mode, production or development before proceeding.
Note that the following commands must be run before beginning your provision:
Used when directly installing to the system you are on; this is
useful when installing on bare metal, an existing VM, or a cloud-based
host. Recommended for small events.
Direct Installation
From the system you wish to install the platform, execute the following:
git clone https://github.com/facebook/fbctf
cd fbctf
source ./extra/lib.sh
quick_setup install prod
or
quick_setup install dev
from https://github.com/facebookarchive/fbctf/wiki/Quick-Setup-Guide
Production Installation
Production is intended for live events utilizing the FBCTF platform.
Installation of the production platform can be performed either
manually, or by using Docker.
Please note that regardless of the installation method, your VM must
have at least 2GB of memory. This is required for the Composer part of
the installation.
Regardless of your installation method, ensure the date and
time is correct on your base system. This will prevent certificate
invalidation issues when downloading certain packages. Follow the below
instructions to force a time update on Ubuntu 16.04:
sudo apt-get install ntp
sudo service ntp stop
sudo ntpd -gq
sudo service ntp start
Manual (Preferred)
Ubuntu 16.04 x64 (Xenial) should first be installed as the hosting
system. This is currently the only supported operating system. Ensure
that you install only the base system, without extras such as LAMP, as
those will cause issues with the FBCTF installation.
Update repositories on the Ubuntu system, to ensure you are getting the latest packages:
sudo apt-get update
Install the git package which will allow you to clone the FBCTF project to your local system:
sudo apt-get install git
Clone the FBCTF project by running the following command. This will create a folder called fbctf in the current directory:
git clone https://github.com/facebook/fbctf
Navigate to the fbctf directory:
cd fbctf
Run the provision script in order to install the FBCTF platform. To
perform a default installation, run the command below. However, check
the provision script section for custom installations:
./extra/provision.sh -m prod -s $PWD
The provision script will autogenerate an administrative password at
the very end. Ensure you document this password, as it will not be
provided anywhere else.
If the admin password needs to be reset, run the following commands in the fbctf directory:
After installing the FBCTF platform, access it through your web browser using the configured IP address.
Login with the credentials admin and the password generated at the
end of the provision script. Access the login screen by clicking the
Login link at the top right of the window. You will then be redirected
to the administration page. The gameboard can be accessed at the bottom
of the navigation bar located on the left side of the window.
from https://github.com/facebookarchive/fbctf/wiki/Installation-Guide,-Production
ZoneMinder is an integrated set of applications which
provide a complete surveillance solution allowing capture, analysis,
recording and monitoring of any CCTV or security cameras attached to a
Linux based machine. It is designed to run on distributions which
support the Video For Linux (V4L) interface and has been tested with
video cameras attached to BTTV cards, various USB cameras and also
supports most IP network cameras.
If a repository that hosts ZoneMinder packages is not
available for your distro, then you are encouraged to build your own
package, rather than build from source. While each distro is different
in ways that set it apart from all the others, they are often similar
enough to allow you to adapt another distro's package building
instructions to your own.
Building from Source
Historically, installing ZoneMinder onto your system
required building from source code by issuing the traditional configure,
make, make install commands. To get ZoneMinder to build, all of its
dependencies had to be determined and installed beforehand. Init and
logrotate scripts had to be manually copied into place following the
build. Optional packages such as jscalendar and Cambozola had to be
manually installed. Uninstalls could leave stale files around, which
could cause problems during an upgrade. Speaking of upgrades, when it
comes time to upgrade, all these manual steps must be repeated.
Better methods exist today that do much of this for you.
The current development team, along with other volunteers, have taken
great strides in providing the resources necessary to avoid building
from source.
Building a ZoneMinder Package
Building ZoneMinder into a package is not any harder than
building from source. As a matter of fact, if you have successfully
built ZoneMinder from source in the past, then you may find these steps
to be easier.
When building a package, it is best to do this work in a
separate environment, dedicated to development purposes. This could be
as simple as creating a virtual machine, using Docker, or using mock.
All it takes is one “Oops” to regret doing this work on your production
server.
Lastly, if you desire to build a development snapshot from
the master branch, it is recommended you first build your package using
an official release of ZoneMinder. This will help identify whether any
problems you may encounter are caused by the build process or are a new
issue in the master branch.
Please visit our ReadtheDocs site for distro specific instructions.
Package Maintainers
Many of the ZoneMinder configuration variable default
values are not configurable at build time through autotools or cmake. A
new tool called zmeditconfigdata.sh has been added to allow package maintainers to manipulate any variable stored in ConfigData.pm without patching the source.
For example, let's say I have created a new ZoneMinder
package that contains the cambozola javascript file. However, by
default cambozola support is turned off. To fix that, add this to the
packaging script:
./utils/zmeditconfigdata.sh ZM_OPT_CAMBOZOLA yes
Note that zmeditconfigdata.sh is intended to be called, from the root build folder, prior to running cmake or configure.
JDK 11 and JUnit are currently required to compile the complete sources of the
main project. Our default IDE is Eclipse.
You can launch the following classes, which are all placed in the basex-core
directory and the org.basex main package:
BaseX : console mode
BaseXServer : server instance, waiting for requests
BaseXClient : console mode, interacting with the server
BaseXGUI : graphical user interface
Moreover, try -h to list the available command line options. For example, you
can use BaseX to process XQuery expressions without entering the console.
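For example (a hedged sketch: the -q flag for executing a query directly is described in the BaseX command-line documentation; the jar name and classpath depend on your build):

```shell
# Evaluate an XQuery expression without starting the console
java -cp basex.jar org.basex.BaseX -q 'for $i in 1 to 3 return $i * 2'
```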
Using Eclipse
BaseX is being developed with the Eclipse environment. Some style guidelines
are integrated in the sources of BaseX; they are applied as soon as you
open the project.
Running BaseX
The following steps can be performed to start BaseX with Eclipse:
Press Run → Run...
Create a new Java Application launch configuration
CodernityDB is an open-source, pure-Python (no third-party
dependencies), fast (really fast; check Speed in the documentation if
you don't believe it), multi-platform, schema-less, NoSQL
database. It has optional support for an HTTP server version
(CodernityDB-HTTP), and also a Python client library
(CodernityDB-PyClient) that aims to be 100% compatible with the embedded
version.
You can think of it as a more advanced key-value database, with
multiple key-value indexes in the same engine. CodernityDB also
supports functions that are executed inside the database.
Key features
Native python database
Multiple indexes
Fast (more than 50,000 insert operations per second; see Speed in the documentation for details)
Embedded mode (default) and Server mode (CodernityDB-HTTP), with a client
library (CodernityDB-PyClient) that aims to be 100% compatible with the
embedded one
Easy way to implement custom Storage
Install
Because CodernityDB is pure Python you need to perform standard installation for Python applications:
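A hedged sketch of that standard installation (the package name as published on PyPI; note that CodernityDB targets Python 2):

```shell
pip install CodernityDB
```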
DuckieTV can be installed as a standalone
application on Windows (7, 8.1, 10, 11), Linux (Debian-based such as
Ubuntu 15.10 and newer), and Mac OSX (10.15 and newer), or as a Chrome extension (in development mode).
Install DuckieTV Standalone
As of v0.81, DuckieTV is available as a standalone build.
Get the latest release here:
Install DuckieTV For Chrome, Safari, Opera, Vivaldi or Edge.
DuckieTV for Chrome comes in 2 versions: One that installs
itself as your browser's "new tab" page, and one that just provides an
easily accessible button to open DuckieTV.
Due to changes to the Google Chrome Web Store security rules (Dec 2019), Dtv is no longer being accepted as an extension app.
Currently the only way to run Dtv as a Chrome extension, is to manually install it under the development mode extensions page.
Transmageddon is a video transcoder for Linux and Unix systems built
using GStreamer. It supports almost any format as its input and can
generate a very large range of output formats. The goal of the application
is to help people create the files they need to play on
their mobile devices, and to let people not hugely experienced with multimedia
generate a multimedia file without having to resort to command-line tools
with ungainly syntaxes.
For information about latest releases check the NEWS file.
A recent version of Firefox, Chrome, Opera, etc or IE≥9
Apache 2.2 on a posix system (linux, solaris, etc) (apache 2.0 may also work)
PHP > 5.2.x
PHP SQlite PDO, SQlite >3.6.14.1
depending on your distribution, you may also have to install the
packages "php-posix", "php-mbstring", "php5-gd", "php5-json",
"php5-sqlite", and "php-pdo"
It has been reported to me that it also runs under MS-Windows but I cannot test it.
Installation instructions
extract the files in a web-exported directory (under the "DocumentRoot")
rename pure.db to itdb.db (pure.db is a blank database)
make the data/itdb.db file AND the data/ directory AND the data/files/ directory readable and writeable by the web server
make translations/ directory readable and writeable by the web server
Login with admin/admin
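A hedged sketch of the steps above as shell commands (the web root /var/www/html/itdb and the Apache user www-data are assumptions; substitute your distro's values, and adjust if pure.db ships in a different location):

```shell
cd /var/www/html/itdb                 # assumed install directory under DocumentRoot
mv pure.db data/itdb.db               # pure.db is a blank database
chown -R www-data:www-data data translations
chmod -R u+rwX data translations      # covers data/itdb.db and data/files/ too
```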
If you need to find out which sqlite library is used by
your apache/php installation, browse to itdb/phpinfo.php or press the
small blue (i) on the bottom left of the itdb menu.
LibrePlan is a free software web application for project management,
monitoring and control.
LibrePlan is a collaborative tool to plan, monitor and control projects, and
has a rich web interface which provides a desktop-like user experience. All the
team members can take part in the planning, which makes real-time planning
possible.
It was designed with a scenario in mind where multiple projects and resources
interact to carry out the work inside a company. Besides, it enables
communication with other company tools by providing a wide set of web
services to import and export data.
It is very important to execute the previous command specifying the
libreplan user (as you can see in the -U option). Otherwise your
LibrePlan installation is not going to start properly, and you could find
something like this in your log files:
JDBCExceptionReporter - ERROR: permission denied for relation entity_sequence
Add the following lines to the Tomcat 8 policy file (/etc/tomcat8/catalina.policy, /var/lib/tomcat8/conf, or /etc/tomcat8/policy.d/03catalina.policy):
grant codeBase "file:/var/lib/tomcat8/webapps/libreplan/-" {
permission java.security.AllPermission;
};
grant codeBase "file:/var/lib/tomcat8/webapps/libreplan.war" {
permission java.security.AllPermission;
};
phpMyFAQ needs to be installed on a web server. FAQ administrators and
users have to use a web browser to access a web-based GUI to read and
add FAQs. phpMyFAQ administrators require access to the files on the
server to update templates and perform upgrades or maintenance.
You should add the code to the httpd-mpm.conf file and enable that file in your Apache configuration with:
Include conf/extra/httpd-mpm.conf
Operating system support
GNU/Linux
Microsoft Windows
OS X
FreeBSD
HP-UX
Solaris
AIX
Netware
Browser support
Mozilla Firefox
Google Chrome
Apple Safari
Opera
Microsoft Edge
If PHP runs as an Apache module, you have to be able to
do a chown on the files before installation. The files and directories
have to be owned by the Apache user.
memleax debugs memory leaks in a running process by attaching to it,
without recompiling or restarting.
status
Because the debugging work depends heavily on CPU architecture and OS,
and I have tested memleax on only a few programs, it is not yet widely
used. So there must be bugs.
Some known bugs for debugging multi-thread program,
#38 and
#39.
Besides, I wrote a new tool, libleak,
which works by hooking memory functions via LD_PRELOAD.
It is much simpler and has much less impact on performance.
So I am not going to improve memleax. Please try libleak instead.
how it works
memleax debugs memory leaks in a running process by attaching to it.
It hooks the target process's memory allocation and free calls, and
reports, in real time, blocks that live long enough as memory leaks.
The default expire threshold is 10 seconds; however, you should always
set it with the -e option according to your scenario.
It is very convenient to use, and suitable for production environment.
There is no need to recompile the program or restart the target process.
You run memleax to monitor the target process, wait for the real-time memory
leak report, and then kill it (e.g. by Ctrl-C) to stop monitoring.
memleax follows new threads, but not forked processes.
If you want to debug multiple processes, just run multiple memleax.
For Arch Linux users, memleax is available in AUR. Thanks to jelly.
For FreeBSD users, memleax is available in FreeBSD Ports Collection.
Thanks to tabrarg.
I tried to submit memleax to Fedora EPEL,
but failed. Any help is welcome.
build from source
The development packages of the following libraries are required:
libunwind
libelf
libdw or libdwarf. libdw is preferred. They are used to read DWARF debug-line
information. If you have neither, pass --disable-debug_line to
configure to disable this feature. As a result you will not see file names
and line numbers in backtraces.
These packages may have different names in different distributions; for
example, libelf may be named libelf, elfutils-libelf, or libelf1.
NOTE: On FreeBSD 10.3, there are built-in libelf and libdwarf already.
However another libelf and libdwarf still can be installed by pkg.
memleax works with built-in libelf and pkg libdwarf. So you should
install libdwarf by pkg, and must not install libelf by pkg.
After all required libraries are installed, run
$ mkdir build
$ cd build
$ cmake ..
$ make
$ sudo make install
usage
start
To debug a running process, run:
$ memleax [options] <target-pid>
then memleax begins to monitor the target process, and report memory leak in real time.
You should always set the expire time with the -e option according to your scenario.
For example, if you are debugging an HTTP server with keepalive, and there are
connections last for more than 5 minutes, you should set -e 360 to cover it.
If your program is expected to free every memory in 1 second, you should set -e 2
to get report in time.
name: spiderpig
help: does whatever a spiderpig does
commands:
- name: swing
  help: swings from a web
  output: |
    I can't do that, I'm a pig!
- name: plop
  help: super secret maneuver
  output: |
    Look out!
Run fauxcli:
$ fauxcli
does whatever a spiderpig does

Usage:
  spiderpig [command]

Available Commands:
  swing       swings from a web
  plop        super secret maneuver

Flags:
  -h, --help   help for spiderpig

Use "spiderpig [command] --help" for more information about a command.
Subcommand:
$ fauxcli swing
I can't do that, I'm a pig!
Alias:
$ alias spiderpig='fauxcli'
$ spiderpig plop
Look out!
# output to print when the command is run
# if this key is omitted, the command will act as a
# parent to any subcommands, essentially doing nothing
# but printing the help text
output: |
  Hello, World!

# flags available to the command
flags:
# (required) long name of the flag (--debug)
- name: debug
  # (required) help text for the flag
  help: enables debugging
  # short name for the flag (-d)
  short: d
  # default value of the flag
  default: false
  # make the flag globally available
  global: true
  # the type of the value (default string)
  # available types:
  # - string
  # - bool
  # - int
  # - float
  type: bool

# subcommands (nested from all the options above)
commands:
- name: subcommand1
  help: a subcommand
  flags:
  - name: upper
    help: converts output to uppercase
    short: u
    type: bool
  output: |
    {{ if .Flags.upper.Bool -}}
    HELLO FROM SC1!
    {{ else -}}
    Hello from SC1!
    {{ end -}}
- name: subcommand2
  help: another subcommand with children
  commands:
  - name: child1
    help: the first child command
    output: |
      Hello from child1
  - name: child2
    help: the second child command
    output: |
      Hello from child2
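Given the flags defined above, toggling the boolean flag on subcommand1 would look something like this (hedged: the exact invocation and template whitespace depend on fauxcli's rendering):

```shell
# The "upper" flag selects the uppercase branch of the output template
$ fauxcli subcommand1 --upper
HELLO FROM SC1!
```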
FauxCLI is written in Go and hosted on GitHub: https://github.com/nextrevision/fauxcli
The programming language SQUIRREL 3.2 stable
This project has successfully been compiled and run on
* Windows (x86 and amd64)
* Linux (x86, amd64 and ARM)
* Illumos (x86 and amd64)
* FreeBSD (x86 and ARM)
from https://github.com/albertodemichelis/squirrel
If you want to build the shared libraries under Windows using Visual Studio, you will have to use CMake version 3.4 or newer. If not, an earlier version will suffice. For a traditional out-of-source build under Linux, type something like
$ mkdir build    # Create temporary build directory
$ cd build
$ cmake ..       # CMake will determine all the necessary information,
                 # including the platform (32- vs. 64-bit)
$ make
$ make install
$ cd ..; rm -r build
The default installation directory will be /usr/local on Unix platforms, and C:/Program Files/squirrel on Windows. The binaries will go into bin/ and the libraries into lib/. You can change this behavior by calling CMake like this:
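For instance (a hedged sketch: CMAKE_INSTALL_PREFIX is the standard CMake variable for changing the installation root; the path shown is just an example):

```shell
$ cmake .. -DCMAKE_INSTALL_PREFIX=/opt/squirrel
```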
With the CMAKE_INSTALL_BINDIR and CMAKE_INSTALL_LIBDIR options, the directories the binaries & libraries will go in (relative to CMAKE_INSTALL_PREFIX) can be specified. For instance,
$ cmake .. -DCMAKE_INSTALL_LIBDIR=lib64
will install the libraries into a 'lib64' subdirectory in the top source directory. The public header files will be installed into the directory the value of CMAKE_INSTALL_INCLUDEDIR points to. If you want only the binaries and no headers, just set -DSQ_DISABLE_HEADER_INSTALLER=ON, and no header files will be installed.
Under Windows, it is probably easiest to use the CMake GUI interface, although invoking CMake from the command line as explained above should work as well.
GCC USERS
There is a very simple makefile that compiles all libraries and executables; from the root of the project, run 'make'.
For 32-bit systems:
$ make
For 64-bit systems:
$ make sq64
VISUAL C++ USERS
Open squirrel.dsw from the root project directory and build (d'oh!).
DOCUMENTATION GENERATION
To be able to compile the documentation, make sure that you have Python installed and the packages sphinx and sphinx_rtd_theme. Browse into doc/ and use either the Makefile for GCC-based platforms or make.bat for Windows platforms.
from https://github.com/albertodemichelis/squirrel/blob/master/COMPILE
21 is an open source Python library and command line interface for
quickly building machine-payable web services. It allows you to
accomplish three major tasks:
Get bitcoin on any device
Add bitcoin micropayments to any Django or Flask app
Earn bitcoin on every HTTP request
The package includes:
an HD wallet to securely manage your bitcoin
crypto and bitcoin libraries to build bitcoin/blockchain applications
commands for mining, buying, and earning bitcoin, as well as requesting it from the 21 faucet
tools for publishing machine-payable endpoints to the 21 Marketplace
containers that allow your machine to sell machine resources for bitcoin
and much more.
Security
Please note that the 21 software is in beta. To protect the security
of your systems while using 21, we highly recommend you install the
software on a device other than your main laptop (e.g. 21 Bitcoin
Computer, an old laptop, or an Amazon Virtual Machine) while the
product is still in beta. You can read more security-related
information here. Please send an
email to security@21.co regarding any issue
concerning security.
Installation
Create an account or install the library and CLI
(python3.4+ is required):
Giada source code is hosted and maintained on GitHub. Building it requires a C++20-compatible compiler, with Git and CMake already installed. This document covers setting up Giada from the command line, but you can also configure and build it directly in your IDE.
Grab the code
First of all, clone the remote repository on your machine:
git clone git@github.com:monocasual/giada
A new folder giada/ will be created. Go inside and initialize the submodules (i.e. the dependencies):
git submodule update --init --recursive
Configure and build
Invoke CMake from inside the giada/ folder as follows:
cmake -B <build-directory> -S .
For example:
cmake -B build/ -S .
CMake will generate the proper project according to your
environment: Makefile on Linux, Visual Studio solution on Windows, XCode
project on macOS. When the script is done without errors, open the
generated project with your IDE or run CMake from the command line to
compile Giada. Command line example:
cmake --build build/
Dependencies
Some dependencies are included as git
submodules. However, Giada requires other external libraries to be
installed on your system. Namely:
We are excited to announce the transfer of Hygieia Project to its own GitHub Organization.
This move is being made to allow us to manage the APIs and
individual collectors in their own repositories, which makes for better
product management. All components of Hygieia are now available under
the Hygieia Organization.