无来 (No Coming)

Whether you come or not,
I am here, lit night after night,
not to keep watch for you,
only to be myself.


The session is still attached on another terminal. The server hasn't detected the network outage on that connection: it only detects an outage when it tries to send a packet and gets an error back, or no response after a timeout, and this hasn't happened yet. You're in a common situation where the client detected the outage because it tried to send some input and failed, but the server is just sitting there waiting for input. Eventually the server will send a keepalive packet and detect that the connection is dead.

In the meantime, use the -d option to detach the screen session from the terminal it is attached to.

screen -r -d 30608

screen -rd is pretty much the standard way to attach to an existing screen session.

MySQL provides an easy mechanism for writing the results of a SELECT statement into a text file on the server. Using the extended options of the INTO OUTFILE syntax, it is possible to create a comma-separated values (CSV) file which can be imported into a spreadsheet application such as OpenOffice or Excel, or any other application which accepts data in CSV format.

Given a query such as

SELECT order_id,product_name,qty FROM orders

which returns three columns of data, the results can be placed into the file /tmp/orders.txt using the query:

SELECT order_id,product_name,qty FROM orders
INTO OUTFILE '/tmp/orders.txt'

This will create a tab-separated file, each row on its own line. To alter this behavior, it is possible to add modifiers to the query:

SELECT order_id,product_name,qty FROM orders
INTO OUTFILE '/tmp/orders.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'

In this example, each field will be enclosed in double quotes, the fields will be separated by commas, and each row will be terminated by a newline (\n). Sample output of this command would look like:

"1","Tech-Recipes sock puppet","14.95"
"2","Tech-Recipes chef's hat","18.95"
...

Keep in mind that the output file must not already exist, and that the user MySQL runs as must have write permission on the directory MySQL is attempting to write the file to.
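Once the export has run, the resulting file can be sanity-checked from the shell. The snippet below simply re-creates the sample output shown above and parses it with awk; note also that on recent MySQL versions the server's secure_file_priv setting may additionally restrict which directory OUTFILE is allowed to write to.

```shell
# Re-create the sample export from above and sanity-check it with awk.
printf '%s\n' '"1","Tech-Recipes sock puppet","14.95"' '"2","Tech-Recipes chef'\''s hat","18.95"' > /tmp/orders.csv
# Strip the enclosing double quotes and print the first field of each record:
ids=$(awk -F',' '{gsub(/"/, "", $1); print $1}' /tmp/orders.csv)
echo "$ids"
```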

The following Magento 2 CLI command will update the Magento base-url and base-url-secure values.

Go to the Magento root directory, then type in the console:

php bin/magento setup:store-config:set --base-url="http://localhost:8080/"

Replace http://localhost:8080/ with your new base-url.

You may also want to change base-url-secure:

php bin/magento setup:store-config:set --base-url-secure="https://localhost:8080/"

Note: both base-url and base-url-secure values must contain the URL’s scheme, http:// or https://, and a trailing slash /.

Then clear the cache:

php bin/magento cache:flush

Troubleshooting
Clear current values from the database
It can happen that the above command doesn't work as expected and some URLs still point to the old base-url. In that case you have to clear some values in your database.

Open the Magento 2 database with your favorite MySQL tool then go to the core_config_data table.

Search for rows having these values in the column path (note that there could be more than one row for each value):

  • “web/unsecure/base_url”
  • “web/secure/base_url”

Delete these rows (Magento will recreate them).

Now you can set the base-url value using the above CLI command.
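The lookup-and-delete described above can also be done in one pass from the shell; the user name and database name below are placeholders for your own installation's credentials:

```shell
# Placeholders: replace magento_user / magento_db with your own credentials.
mysql -u magento_user -p magento_db <<'SQL'
-- Inspect the current values first (there may be more than one row per path):
SELECT config_id, scope, scope_id, path, value
  FROM core_config_data
 WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');
-- Then remove them; Magento recreates the rows on the next config save:
DELETE FROM core_config_data
 WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');
SQL
```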

Single-Store Mode option enabled
If you have the Single-Store Mode option enabled, this can cause problems when setting the base-url from the command line.

In this case you should modify the base-url using only the command line, not the Magento Admin Panel. If you have already saved the Base url field value from the Admin Panel, clear the values in the core_config_data table as described above.

1. Remove exited containers

Until a built-in cleanup command is available, you can string docker commands together with other Unix commands to get what you need. Here is an example of how to clean up old containers that are weeks old.

$ docker ps --filter "status=exited" | grep 'weeks ago' | awk '{print $1}' | xargs --no-run-if-empty docker rm

2. Remove untagged (&lt;none&gt;) images

$ docker images | grep "<none>" | awk '{print $3}' | xargs docker rmi

3. Remove all stopped containers

If you want to use awk for this, consider the following command, which removes all stopped containers without an error caused by the header line:

$ docker ps -a | awk 'NR > 1 {print $1}' | xargs docker rm
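The `NR > 1` condition above skips the header row that `docker ps` prints; the idiom can be verified on synthetic input standing in for docker's output:

```shell
# Synthetic stand-in for `docker ps -a` output: awk's NR > 1 condition skips
# the header row, and $1 is the container ID column.
ids=$(printf 'CONTAINER ID   IMAGE\nabc123   nginx\ndef456   redis\n' | awk 'NR > 1 {print $1}')
echo "$ids"
```

Newer Docker releases also ship built-in equivalents, docker container prune and docker image prune, which make most of these pipelines unnecessary.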

Today I'd like to introduce a filename transcoding tool: convmv. It makes it easy to convert the encoding of the name of a single file, or of every file under a directory, for example from GBK to UTF-8.

Syntax:

convmv [options] FILE(S) … DIRECTORY(S)

Main options:

  • 1. -f ENCODING
    Specifies the current encoding of the filenames, e.g. -f gbk

  • 2. -t ENCODING
    Specifies the encoding to convert to, e.g. -t utf-8

  • 3. -r
    Recursively converts all filenames under a directory

  • 4. --list
    Lists all supported encodings

  • 5. --notest
    By default convmv only prints what the conversion would look like; add this option to actually perform it.

  • For more options, see man convmv.

Example:
Recursively convert the filenames under the centos directory from GBK to UTF-8:

convmv -f gbk -t utf-8 --notest -r  centos
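Under the hood this is the same byte-level conversion that iconv performs on text, only applied to file names; a single name can be previewed that way (the byte string below is "中文" encoded in GBK, written with octal escapes):

```shell
# "中文" encoded as GBK bytes (D6 D0 CE C4), written with octal escapes:
name_gbk=$(printf '\326\320\316\304')
# The same per-name conversion convmv performs, done with iconv:
name_utf8=$(printf '%s' "$name_gbk" | iconv -f GBK -t UTF-8)
echo "$name_utf8"
```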

Script: latest-ffmpeg-centos6.sh

# source: https://trac.ffmpeg.org/wiki/CentosCompilationGuide

yum install autoconf automake gcc gcc-c++ git libtool make nasm pkgconfig zlib-devel

mkdir ~/ffmpeg_sources

cd ~/ffmpeg_sources
curl -O http://www.tortall.net/projects/yasm/releases/yasm-1.2.0.tar.gz
tar xzvf yasm-1.2.0.tar.gz
cd yasm-1.2.0
./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin"
make
make install
make distclean
. ~/.bash_profile

cd ~/ffmpeg_sources
git clone --depth 1 git://git.videolan.org/x264
cd x264
./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --enable-static
make
make install
make distclean

cd ~/ffmpeg_sources
git clone --depth 1 git://github.com/mstorsjo/fdk-aac.git
cd fdk-aac
autoreconf -fiv
./configure --prefix="$HOME/ffmpeg_build" --disable-shared
make
make install
make distclean

cd ~/ffmpeg_sources
curl -L -O http://downloads.sourceforge.net/project/lame/lame/3.99/lame-3.99.5.tar.gz
tar xzvf lame-3.99.5.tar.gz
cd lame-3.99.5
./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --disable-shared --enable-nasm
make
make install
make distclean

cd ~/ffmpeg_sources
curl -O http://downloads.xiph.org/releases/opus/opus-1.0.3.tar.gz
tar xzvf opus-1.0.3.tar.gz
cd opus-1.0.3
./configure --prefix="$HOME/ffmpeg_build" --disable-shared
make
make install
make distclean

cd ~/ffmpeg_sources
curl -O http://downloads.xiph.org/releases/ogg/libogg-1.3.1.tar.gz
tar xzvf libogg-1.3.1.tar.gz
cd libogg-1.3.1
./configure --prefix="$HOME/ffmpeg_build" --disable-shared
make
make install
make distclean

cd ~/ffmpeg_sources
curl -O http://downloads.xiph.org/releases/vorbis/libvorbis-1.3.3.tar.gz
tar xzvf libvorbis-1.3.3.tar.gz
cd libvorbis-1.3.3
./configure --prefix="$HOME/ffmpeg_build" --with-ogg="$HOME/ffmpeg_build" --disable-shared
make
make install
make distclean

cd ~/ffmpeg_sources
git clone --depth 1 https://chromium.googlesource.com/webm/libvpx
cd libvpx
git checkout tags/v1.3.0
./configure --prefix="$HOME/ffmpeg_build" --disable-examples
make
make install
make clean

cd ~/ffmpeg_sources
curl -O http://downloads.xiph.org/releases/theora/libtheora-1.1.1.tar.gz
tar xzvf libtheora-1.1.1.tar.gz
cd libtheora-1.1.1
./configure --prefix="$HOME/ffmpeg_build" --with-ogg="$HOME/ffmpeg_build" --disable-examples --disable-shared --disable-sdltest --disable-vorbistest
make
make install
make distclean

yum -y install freetype-devel speex-devel

cd ~/ffmpeg_sources
git clone --depth 1 git://source.ffmpeg.org/ffmpeg
cd ffmpeg
PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig"
export PKG_CONFIG_PATH
./configure --prefix="$HOME/ffmpeg_build" --extra-cflags="-I$HOME/ffmpeg_build/include" --extra-ldflags="-L$HOME/ffmpeg_build/lib" --bindir="$HOME/bin" --extra-libs="-ldl" --enable-gpl --enable-nonfree --enable-libfdk_aac --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libfreetype --enable-libspeex --enable-libtheora
make
make install
make distclean
hash -r
. ~/.bash_profile

cd ~/ffmpeg_sources/ffmpeg/tools
make qt-faststart
cp qt-faststart /usr/bin
ldconfig
cd

FAQ:

  • ERROR: libvpx decoder version must be >=0.9.1

    sudo yum install libvpx.x86_64 libvpx-devel.x86_64

root@docker:~/drupal# docker inspect drupal_mysql | grep MYSQL_ROOT_PASSWORD
"MYSQL_ROOT_PASSWORD=test",
root@docker:~/drupal# docker exec -it drupal_mysql mysql -u root -p
Enter password: test
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

I’ve gratuitously rebuilt, recreated, --no-cache'd, --force-recreate'd.. nothing is working. I’ve also tried putting the environment variable directly in a mysql Dockerfile and as an “environment” argument in my docker-compose.yml.

The only thing that IS working is passing -e 'MYSQL_ROOT_PASSWORD=test' in a docker run statement.
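A likely cause, not stated in the question itself: the official mysql image only applies MYSQL_ROOT_PASSWORD when it initializes an empty data directory, so a password set (or changed) after the volume was first created is silently ignored. Assuming a Compose-managed named volume, recreating it from scratch would look like this:

```shell
# WARNING: this throws away the database volume and all data in it.
docker-compose down -v   # stop the stack and remove its named volumes
docker-compose up -d     # the empty volume is re-initialized on the next
                         # start, and MYSQL_ROOT_PASSWORD is actually applied
```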

When I used crontab to sync a folder, I ran into a problem. The crontab entry runs once a minute, but when the folder's contents change, one minute is not always enough for the sync to finish, and by then the next rsync has already started.

This creates a problem: two rsync processes working on the same files at the same time can cause trouble. For example:

* * * * * /usr/bin/rsync -avlR /data/files    172.16.xxx.xxx:/data

I first thought of writing a script to handle this, but that was too much hassle, so I used a Linux lock instead, like this:

* * * * * flock -xn /var/run/rsync.lock -c "rsync -avlR /data/files    172.16.xxx.xxx:/data"

Here flock takes an exclusive lock (-x) on the lock file, and -n makes the attempt non-blocking: if the lock is already held, flock gives up immediately instead of running the command after -c. The lock is held for as long as that command runs, so until the first rsync finishes, no second rsync can start on the same data.

Source: http://www.cnblogs.com/cmsd/p/3697049.html
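The non-blocking behavior of flock -n can be checked directly in a shell; the lock path below is just a scratch file:

```shell
# Hold an exclusive lock for 2 seconds in a background job.
flock -x /tmp/flock-demo.lock -c 'sleep 2' &
sleep 1
# While the lock is held, a non-blocking attempt (-n) fails immediately
# instead of waiting, which is exactly what prevents overlapping cron jobs.
if flock -xn /tmp/flock-demo.lock -c 'true'; then
    result=acquired
else
    result=busy
fi
wait
echo "$result"
```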

Requirement: sync all the images (.jpg) under a directory. The directory contains many other files, but only the .jpg files should be synced.

rsync has an --exclude option to skip specified files, and an --include option that does exactly the opposite.

So can we simply use --include="*.jpg"?

rsync -av --include="*.jpg" /src/ /des/

Experiment shows this is wrong: with no exclude rule, everything else is still transferred.

The correct answer is:

rsync -av --include="*.jpg" --exclude="*" /src/ /des/
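A quick way to check the filter rules, using a throwaway directory pair (the paths are illustrative):

```shell
# Build a small demo tree: one .jpg file and one non-jpg file.
mkdir -p /tmp/rsync_src /tmp/rsync_des
touch /tmp/rsync_src/photo.jpg /tmp/rsync_src/notes.txt
# Include *.jpg first, then exclude everything else; rsync applies the
# first matching rule, so the order of the options matters.
rsync -av --include='*.jpg' --exclude='*' /tmp/rsync_src/ /tmp/rsync_des/
ls /tmp/rsync_des
```

To also recurse into subdirectories you would additionally need --include='*/' before the exclude (optionally with --prune-empty-dirs), since the exclude rule would otherwise stop rsync from descending into them.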

SQL dumps exported from MySQL can be very, very slow to import; I once saw an import of only 450,000 records take nearly 3 hours. Using a few options sensibly at export time can speed the import up dramatically.

  • -e uses the multi-row INSERT syntax that packs several VALUES lists into one statement;
  • --max_allowed_packet=XXX sets the maximum size of the buffer used for client/server communication;
  • --net_buffer_length=XXX sets the TCP/IP and socket communication buffer size; rows are written in batches up to net_buffer_length bytes long.

Note: max_allowed_packet and net_buffer_length must not be larger than the corresponding values on the target database, or the import may fail.

First, check the parameter values on the target server:

mysql>show variables like 'max_allowed_packet';
mysql>show variables like 'net_buffer_length';

Then write the mysqldump command based on those values, e.g.:

E:\eis>mysqldump -uroot -p eis_db goodclassification -e --max_allowed_packet=1048576 --net_buffer_length=16384 >good3.sql

An import that previously took 2 hours now completes in tens of seconds.