Docker with CLI

The Docker image can be found here.
Download the example configuration file:

```shell
wget -O .env https://raw.githubusercontent.com/fr0tt/benotes/master/.env.sqlite.example
```

or copy the content of this file into a new file called `.env`.

Edit `.env` by setting a random secret for `APP_KEY` (generated with e.g. `openssl rand -base64 32`) and - only if you do not want to use SQLite - also edit `DB_PORT` according to your database (you can have a look at classic for this).
Run the Docker container (with a named volume to store data, a bind mount for webserver logs and environment variables from your `.env` file):

```shell
docker run -p 8000:80 -itd --rm \
    -v benotes_storage:/var/www/storage \
    -v "$(pwd)"/nginx/logs/:/var/lib/nginx/logs/ \
    -v "$(pwd)"/.env:/var/www/.env \
    --env-file ./.env \
    --name benotes fr0tt/benotes
```
Execute these two commands to initialize the application. First open a shell in your container (referencing the container by its name, in this case as previously defined: `benotes`):

```shell
docker exec -it benotes sh
```

then, inside the container, run:

```shell
php artisan install --only-user
```
- Rerun the container in order to use the latest build.
What benefit would you gain from doing so?

Currently the only difference is that every thumbnail would otherwise be served from its original location, meaning that your browser would need to communicate with each of these hosts instead of fetching all of them, shrunk, from one single source: your local filesystem or an S3 bucket. Any S3-compatible object storage can be used, such as AWS S3, DigitalOcean Spaces or MinIO.
- Create a bucket and make it public
- Add the following to your `.env` file with the correct corresponding values (note that dotenv files do not allow spaces around `=`):

```
FILESYSTEM_DRIVER=s3
AWS_ACCESS_KEY_ID=yourKeyId
AWS_SECRET_ACCESS_KEY=yourAccessKey
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=yourCreativeBucketName
AWS_ENDPOINT=endpointUrl
```
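As a concrete illustration, a self-hosted MinIO setup might look like this (all host names, keys and bucket names below are hypothetical placeholders):

```
FILESYSTEM_DRIVER=s3
AWS_ACCESS_KEY_ID=minio-access-key
AWS_SECRET_ACCESS_KEY=minio-secret-key
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=benotes-thumbnails
AWS_ENDPOINT=https://minio.example.com:9000
```

MinIO typically accepts any region value, but `AWS_DEFAULT_REGION` must still be set for the S3 client to initialize.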