sqlite-gui: another SQLite editor for Windows

Version Control

SQLite sources are managed using
Fossil, a distributed version control system
that was specifically designed and written to support SQLite development.
The Fossil repository contains the urtext.

If you are reading this on GitHub or some other Git repository or service,
then you are looking at a mirror. The names of check-ins and
other artifacts in a Git mirror are different from the official
names for those objects. The official names for check-ins are
found in a footer on the check-in comment for authorized mirrors.
The official check-in name can also be seen in the manifest.uuid file
in the root of the tree. Always use the official name, not the
Git-name, when communicating about an SQLite check-in.

If you pulled your SQLite source code from a secondary source and want to
verify its integrity, there are hints on how to do that in the
section below.

Linux

DB Browser for SQLite works well on Linux.

Debian

Note that Debian focuses on stability rather than the newest features, so packages typically contain a somewhat older (but well tested) version compared to the latest release.

Update the cache using:
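On a typical Debian setup that would be:

sudo apt-get update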

Install the package using:
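The Debian package is named sqlitebrowser, so typically:

sudo apt-get install sqlitebrowser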

Ubuntu and Derivatives

Stable release

For Ubuntu and derivatives, @deepsidhu1313
provides a PPA with the latest release here:

https://launchpad.net/~linuxgndu/+archive/ubuntu/sqlitebrowser

To add this PPA, just type these commands into a terminal:
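A typical invocation (derived from the PPA address above) would be:

sudo add-apt-repository ppa:linuxgndu/sqlitebrowser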

Then update the cache using:
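For example:

sudo apt-get update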

Install the package using:
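For example:

sudo apt-get install sqlitebrowser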

Ubuntu 14.04.X, 15.04.X, 15.10.X and 16.04.X are supported for now (until
Launchpad decides to discontinue building for any series).

Ubuntu Precise (12.04) and Utopic (14.10) are not supported:

  • Precise does not have a new enough Qt package in its repository by default,
    which is a dependency
  • Launchpad does not support Utopic any more, which has reached its End of
    Life

Nightly builds

Nightly builds are available here:

https://launchpad.net/~linuxgndu/+archive/ubuntu/sqlitebrowser-testing

To add this PPA, type these commands into the terminal:
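A typical invocation (derived from the PPA address above) would be:

sudo add-apt-repository ppa:linuxgndu/sqlitebrowser-testing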

Then update the cache using:
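For example:

sudo apt-get update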

Install the package using:
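For example:

sudo apt-get install sqlitebrowser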

On other distributions, compile DB4S using the instructions
in BUILDING.md.

Obtaining The Code

If you do not want to use Fossil, you can download tarballs, ZIP
archives, or SQLite archives as follows:

  • Latest trunk check-in as
    Tarball,
    ZIP-archive, or
    SQLite-archive.

  • Latest release as
    Tarball,
    ZIP-archive, or
    SQLite-archive.

  • For other check-ins, substitute an appropriate branch name or
    tag or hash prefix in place of «release» in the URLs of the previous
    bullet. Or browse the timeline
    to locate the check-in desired, click on its information page link,
    then click on the «Tarball» or «ZIP Archive» links on the information
    page.

If you do want to use Fossil to check out the source tree,
first install Fossil version 2.0 or later.
(Source tarballs and precompiled binaries available
here. Fossil is
a stand-alone program. To install, simply download or build the single
executable file and put that file someplace on your $PATH.)
Then run commands like this:
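A typical sequence (assuming the canonical repository URL https://www.sqlite.org/src) would be:

mkdir sqlite
cd sqlite
fossil clone https://www.sqlite.org/src sqlite.fossil
fossil open sqlite.fossil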

After setting up a repository using the steps above, you can always
update to the latest version using:
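For example:

fossil update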

Or type «fossil ui» to get a web-based user interface.

7.6. Export to CSV

To export an SQLite table (or part of a table) as CSV, simply set
the «mode» to «csv» and then run a query to extract the desired rows
of the table.

sqlite> .headers on
sqlite> .mode csv
sqlite> .once c:/work/dataout.csv
sqlite> SELECT * FROM tab1;
sqlite> .system c:/work/dataout.csv

In the example above, the «.headers on» line causes column labels to
be printed as the first row of output. This means that the first row of
the resulting CSV file will contain column labels. If column labels are
not desired, set «.headers off» instead. (The «.headers off» setting is
the default and can be omitted if the headers have not been previously
turned on.)

The line «.once FILENAME» causes all query output to go into
the named file instead of being printed on the console. In the example
above, that line causes the CSV content to be written into a file named
«C:/work/dataout.csv».

The final line of the example (the «.system c:/work/dataout.csv»)
has the same effect as double-clicking on the c:/work/dataout.csv file
in Windows. This will typically bring up a spreadsheet program to display
the CSV file.

That command only works as written on Windows.
The equivalent line on a Mac would be:

sqlite> .system open dataout.csv

On Linux and other Unix systems you will need to enter something like:

sqlite> .system xdg-open dataout.csv

7.6.1. Export to Excel

To simplify export to a spreadsheet, the CLI provides the
«.excel» command which captures the output of a single query and sends
that output to the default spreadsheet program on the host computer.
Use it like this:

sqlite> .excel
sqlite> SELECT * FROM tab;

The command above writes the output of the query as CSV into a temporary
file, invokes the default handler for CSV files (usually the preferred
spreadsheet program such as Excel or LibreOffice), then deletes the
temporary file. This is essentially a short-hand method of doing
the sequence of «.csv», «.once», and «.system» commands described above.

The «.excel» command is really an alias for «.once -x». The -x option
to .once causes it to write results as CSV into a temporary file that
is named with a «.csv» suffix, then invoke the system's default handler
for CSV files.

There is also a «.once -e» command which works similarly, except that
it names the temporary file with a «.txt» suffix so that the default
text editor for the system will be invoked, instead of the default
spreadsheet.
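For instance (the query here is just an illustration), to send a listing of the schema to the default text editor one might run:

sqlite> .once -e
sqlite> SELECT sql FROM sqlite_master;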

8. Accessing ZIP Archives As Database Files

In addition to reading and writing SQLite database files,
the sqlite3 program will also read and write ZIP archives.
Simply specify a ZIP archive filename in place of an SQLite database
filename on the initial command line, or in the «.open» command,
and sqlite3 will automatically detect that the file is a
ZIP archive instead of an SQLite database and will open it as such.
This works regardless of file suffix. So you can open JAR, DOCX,
and ODP files and any other file format that is really a ZIP
archive and SQLite will read it for you.
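For example, assuming a hypothetical archive named example.zip:

sqlite> .open example.zip
sqlite> SELECT name, sz FROM zip;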

A ZIP archive appears to be a database containing a single table
with the following schema:

CREATE TABLE zip(
  name,     -- Name of the file
  mode,     -- Unix-style file permissions
  mtime,    -- Timestamp, seconds since 1970
  sz,       -- File size after decompression
  rawdata,  -- Raw compressed file data
  data,     -- Uncompressed file content
  method    -- ZIP compression method code
);

So, for example, if you wanted to see the compression efficiency
(expressed as the size of the compressed content relative to the
original uncompressed file size) for all files in the ZIP archive,
sorted from most compressed to least compressed, you could run a
query like this:

sqlite> SELECT name, (100.0*length(rawdata))/sz FROM zip ORDER BY 2;

Or, using the writefile() SQL function, you can extract elements of the
ZIP archive:

sqlite> SELECT writefile(name,data) FROM zip
   ...> WHERE name LIKE 'docProps/%';

SQLite3 Exceptions

Exceptions are errors that occur while a script is running. In Python, every exception is an instance of a class derived from BaseException.

The sqlite3 module defines the following main Python exceptions:

DatabaseError

Any error related to the database raises DatabaseError.

IntegrityError

IntegrityError is a subclass of DatabaseError and is raised when there is a data-integrity problem, for example when foreign-key data is not updated across all tables, leaving the data inconsistent.

ProgrammingError

ProgrammingError is raised when there is a syntax error, a table is not found, or a function is called with the wrong number of parameters or arguments.

OperationalError

This exception is raised when a database operation fails, for example because of an unexpected disconnect. It is not the programmer's fault.

NotSupportedError

NotSupportedError is raised when a method is used that is not defined or not supported by the database.
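A minimal sketch showing two of these exception classes in practice (the table and values are made up for illustration):

import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

try:
    # Supplying the wrong number of bound parameters raises ProgrammingError
    cur.execute("INSERT INTO users (id, name) VALUES (?, ?)", (1,))
except sqlite3.ProgrammingError as e:
    print("ProgrammingError:", e)

try:
    cur.execute("INSERT INTO users (id, name) VALUES (?, ?)", (1, "alice"))
    # Reusing the same primary key violates a constraint and raises IntegrityError
    cur.execute("INSERT INTO users (id, name) VALUES (?, ?)", (1, "bob"))
except sqlite3.IntegrityError as e:
    print("IntegrityError:", e)

con.close()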

SQLite3 manager LITE

Vendor website: http://www.pool-magic.net/sqlite-manager.htm

Price:

Criterion            Score (0 to 2)
Functionality        2
Price                2
UTF-8 support
Russian interface
Usability            1
Total                5

Compared to the previous program, SQLite3 manager LITE looks more functional. Besides simply browsing the data in tables, you can also view and create triggers, indexes, views, and so on. In addition, you can export all of the database metadata, and you can create data files for exporting tables to Paradox and InterBase.

The program also attempts something like a visual query-building wizard in the spirit of MS Access, but in my opinion the attempt was not a success.

The free version has one drawback: it does not understand data in UTF-8 encoding. There is, of course, an option to specify the database encoding when opening a file, but UTF-8 is missing from the list of encodings. I never got to see how the Full version works, because the vendor's website is an impenetrable mess: some obscure JavaScript displays incomprehensible information. All in all, I got the impression that the project has quietly died.

7.4. The edit() SQL function

The CLI has another built-in SQL function named edit(). Edit() takes
one or two arguments. The first argument is a value — usually a large
multi-line string to be edited. The second argument is the name of a
text editor. If the second argument is omitted, the VISUAL environment
variable is used. The edit() function writes its first argument into a
temporary file, invokes the editor on the temporary file, rereads the file
back into memory after the editor is done, then returns the edited text.

The edit() function can be used to make changes to large text
values. For example:

sqlite> UPDATE docs SET body=edit(body) WHERE name='report-15';

In this example, the content of the docs.body field for the entry where
docs.name is «report-15» will be sent to the editor. After the editor returns,
the result will be written back into the docs.body field.

The default operation of edit() is to invoke a text editor. But by using
an alternative edit program in the second argument, you can also get it to edit
images or other non-text resources. For example, if you want to modify a JPEG
image that happens to be stored in a field of a table, you could run:

sqlite> UPDATE pics SET img=edit(img,'gimp') WHERE id='pic-1542';

The edit program can also be used as a viewer, by simply ignoring the
return value. For example, to merely look at the image above, you might run:

sqlite> SELECT length(edit(img,'gimp')) WHERE id='pic-1542';

How to create a database and insert various data

Creating a database in SQLite is very simple, but the process requires that you understand a little about SQL. Let's take a look at the code that creates a database for storing music albums:

Python

import sqlite3

conn = sqlite3.connect("mydatabase.db")  # or ":memory:" to keep it in RAM
cursor = conn.cursor()

# Create the table
cursor.execute("""CREATE TABLE albums
                  (title text, artist text, release_date text,
                   publisher text, media_type text)
               """)


First we need to import the sqlite3 module and create a connection to the database. You can pass a file name or simply use the special string ":memory:" to create the database in memory. In our case we create it on disk, in a file named mydatabase.db.

Next we create a cursor object, which lets us interact with the database and, among other things, add records. Here we use SQL syntax to create a table named albums with the following five fields: title, artist, release_date, publisher, and media_type. SQLite supports only five data types: null, integer, real, text, and blob. Let's run this code and insert some data into our new table. Remember that if you run the CREATE TABLE command when the table already exists in the database, you will get an error message.

Python

# Insert a record into the table
cursor.execute("""INSERT INTO albums
                  VALUES ('Glow', 'Andy Hunter', '7/24/2012',
                          'Xplore Records', 'MP3')"""
               )

# Save (commit) the changes
conn.commit()

# Insert multiple records using the safer "?" placeholder syntax
albums = [('Exodus', 'Andy Hunter', '7/9/2002', 'Sparrow Records', 'CD'),
          ('Until We Have Faces', 'Red', '2/1/2011', 'Essential Records', 'CD'),
          ('The End is Where We Begin', 'Thousand Foot Krutch', '4/17/2012', 'TFKmusic', 'CD'),
          ('The Good Life', 'Trip Lee', '4/10/2012', 'Reach Records', 'CD')]

cursor.executemany("INSERT INTO albums VALUES (?,?,?,?,?)", albums)
conn.commit()


Here we used the INSERT INTO SQL command to insert a record into our database.

Note that each value is enclosed in single quotes. This can complicate things if you need to insert strings that contain single quotes.

In any case, to save the record to the database we need to commit it. The next piece of code shows how to add several records at once using the cursor's executemany method. Note that we use question marks (?) instead of string substitution (%) to insert the values. Using string substitution is unsafe, because it can open the door to SQL injection attacks. Using the question-mark placeholders is much better, and using SQLAlchemy is better still, since it does the work needed to turn embedded single quotes into a form SQLite can accept.
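To make the difference concrete (the albums table and cursor come from the examples above; the first variant is shown only as what to avoid):

# Unsafe: builds the SQL string by hand, which is vulnerable to SQL injection
artist = "Andy Hunter"
cursor.execute("SELECT * FROM albums WHERE artist = '%s'" % artist)

# Safe: the sqlite3 module binds the value through a "?" placeholder
cursor.execute("SELECT * FROM albums WHERE artist = ?", (artist,))
print(cursor.fetchall())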

Creating a Database

After a connection to SQLite is created, the database file is created automatically if it does not already exist. The file is created on disk, but you can also create the database in RAM by passing ":memory:" to the connect method; such a database is called an in-memory database.

Consider the code below, which creates a database with try, except, and finally blocks to handle any exceptions:
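A minimal sketch of the code being described (the function name and the printed message follow the description below; everything else is illustrative):

import sqlite3

def sql_connection():
    con = None
    try:
        # ":memory:" creates the database in RAM rather than on disk
        con = sqlite3.connect(":memory:")
        print("Connection is established: Database is created in memory")
    except sqlite3.Error as error:
        # Report any error raised while connecting
        print(error)
    finally:
        # Closing the connection is optional but frees its resources
        if con:
            con.close()

sql_connection()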

First the sqlite3 module is imported, then a function named sql_connection is defined. Inside the function there is a try block in which the connect() method returns a connection object once the connection is established.

Then comes the except block, which prints an error message if any exception occurs. If there are no errors, the connection is established and the script prints "Connection is established: Database is created in memory".

Finally, the connection is closed in the finally block. Closing the connection is optional, but it is good programming practice, as it frees memory from any unused resources.

2.1. Restrictions on UPDATE Statements Within CREATE TRIGGER

The following additional syntax restrictions apply to UPDATE statements that
occur within the body of a CREATE TRIGGER statement.

  • The table-name specified as part of an UPDATE
    statement within
    a trigger body must be unqualified. In other words, the
    schema-name. prefix on the table name of the UPDATE is
    not allowed within triggers (see the example after this list). Unless the table to which the trigger
    is attached is in the TEMP database, the table being updated by the
    trigger program must reside in the same database as it. If the table
    to which the trigger is attached is in the TEMP database, then the
    unqualified name of the table being updated is resolved in the same way
    as it is for a top-level statement (by searching first the TEMP database,
    then the main database, then any other databases in the order they were
    attached).

  • The INDEXED BY and NOT INDEXED clauses are not allowed on UPDATE
    statements within triggers.

  • The LIMIT and ORDER BY clauses for UPDATE are unsupported within
    triggers, regardless of the compilation options used to build SQLite.
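As a small illustration of the first restriction (the accounts and audit tables are hypothetical), the UPDATE inside a trigger body must name its target table without a schema prefix:

CREATE TABLE accounts(id INTEGER PRIMARY KEY, balance REAL);
CREATE TABLE audit(account_id INTEGER, last_change TEXT);

CREATE TRIGGER accounts_touch AFTER UPDATE ON accounts
BEGIN
  -- Writing "UPDATE main.audit ..." here would be rejected
  UPDATE audit SET last_change = datetime('now') WHERE account_id = NEW.id;
END;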

Software Licenses

The SQLite source code is in the
public domain,
and is free for use
by anyone and for any purpose. No license is required. However, some
users desire a license so that they can have warranty of title, or just
because their company lawyers say they need one. A
perpetual license
and warranty of title
for the core SQLite source code is available for this purpose.

The
SQLite Encryption
Extension (SEE),
the ZIPVFS Extension,
and the Compressed and
Encrypted ReadOnly Database (CEROD) extension are enhanced versions
of SQLite that handle encrypted
and/or compressed databases. SEE can read and write encrypted databases.
SEE encrypts all database content, including metadata, so that the database
file appears as white noise. ZIPVFS
compresses the database on-the-fly using application-supplied
compression and decompression functions.
CEROD reads a compressed database that is
also optionally encrypted. All of SEE, ZIPVFS, and CEROD are
supplied in source code form only; the licensee is responsible for
compiling the products for their chosen platform. It is not difficult
to compile any of these extensions. All products come in the form of an
amalgamated source file
named «sqlite3.c». So compiling SEE, ZIPVFS, or CEROD into an application
is simply a matter of substituting the SEE-, ZIPVFS-, or CEROD-enabled sqlite3.c
source file in place of the public-domain sqlite3.c source file and recompiling.
Licenses for SEE, ZIPVFS, and CEROD are perpetual.
All three extensions can read and write ordinary,
uncompressed and unencrypted database files.


7.5. Importing CSV files

Use the «.import» command to import CSV (comma separated value) data into
an SQLite table. The «.import» command takes two arguments which are the
source from which CSV data is to be read and the name of the
SQLite table into which the CSV data is to be inserted. The source argument
is the name of a file to be read or, if it begins with a «|» character,
specifies a command which will be run to produce the input CSV data.

Note that it is important to set the «mode» to «csv» before running the
«.import» command. This is necessary to prevent the command-line shell
from trying to interpret the input file text as some other format.

sqlite> .import C:/work/somedata.csv tab1

There are two cases to consider: (1) Table «tab1» does not previously
exist and (2) table «tab1» does already exist.

In the first case, when the table does not previously exist, the table is
automatically created and the content of the first row of the input CSV
file is used to determine the name of all the columns in the table. In
other words, if the table does not previously exist, the first row of the
CSV file is interpreted to be column names and the actual data starts on
the second row of the CSV file.

For the second case, when the table already exists, every row of the
CSV file, including the first row, is assumed to be actual content. If
the CSV file contains an initial row of column labels, you can cause
the .import command to skip that initial row using the «--skip 1» option.
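Putting this together, a typical session importing a file with a header row into an existing table might look like this (the file and table names are illustrative, and the --skip option requires a reasonably recent CLI):

sqlite> .mode csv
sqlite> .import --skip 1 C:/work/somedata.csv tab1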

SQLite Studio – Manager and Administration

There are lots of SQLite management tools that make working with SQLite databases easier. Instead of creating and managing databases using a command line, these tools provide a set of GUI tools that let you create and manage the database.

The official SQLite website has dozens of such tools listed; you can view them here: SQLite Management Tools. Here is a recommended one:

SQLite Studio: It is a portable tool that doesn't require installation. It supports both SQLite3 and SQLite2. You can easily import and export data to various formats like CSV, HTML, PDF, and JSON. It's open source and supports Unicode.

2.3. Optional LIMIT and ORDER BY Clauses

If SQLite is built with the SQLITE_ENABLE_UPDATE_DELETE_LIMIT
compile-time option, then the syntax of the UPDATE statement is extended
with optional ORDER BY and LIMIT clauses as follows:

[Syntax diagram: the UPDATE statement grammar (optionally preceded by a WITH [RECURSIVE] common-table-expression), extended with optional ORDER BY ordering-term, ... and LIMIT expr [OFFSET expr] clauses]

If an UPDATE statement has a LIMIT clause, the maximum number of rows that
will be updated is found by evaluating the accompanying expression and casting
it to an integer value. A negative value is interpreted as «no limit».

If the LIMIT expression evaluates to a non-negative value N and the
UPDATE statement has an ORDER BY clause, then all rows that would be updated in
the absence of the LIMIT clause are sorted according to the ORDER BY and the
first N updated. If the UPDATE statement also has an OFFSET clause,
then it is similarly evaluated and cast to an integer value. If the OFFSET
expression evaluates to a non-negative value M, then the first M
rows are skipped and the following N rows updated instead.

If the UPDATE statement has no ORDER BY clause, then all rows that
would be updated in the absence of the LIMIT clause are assembled in an
arbitrary order before applying the LIMIT and OFFSET clauses to determine
which are actually updated.

The ORDER BY clause on an UPDATE statement is used only to determine which
rows fall within the LIMIT. The order in which rows are modified is arbitrary
and is not influenced by the ORDER BY clause.
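As an illustration (this only runs in builds compiled with the option above; the inventory table is hypothetical), the following discounts the ten most expensive items:

UPDATE inventory SET price = price * 0.9 ORDER BY price DESC LIMIT 10;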

How It All Fits Together

SQLite is modular in design.
See the architectural description
for details. Other documents that are useful in
helping to understand how SQLite works include the
file format description,
the virtual machine that runs
prepared statements, the description of
how transactions work, and
the overview of the query planner.

Years of effort have gone into optimizing SQLite, both
for small size and high performance. And optimizations tend to result in
complex code. So there is a lot of complexity in the current SQLite
implementation. It will not be the easiest library in the world to hack.

Key files:

  • sqlite.h.in — This file defines the public interface to the SQLite
    library. Readers will need to be familiar with this interface before
    trying to understand how the library works internally.

  • sqliteInt.h — this header file defines many of the data objects
    used internally by SQLite. In addition to «sqliteInt.h», some
    subsystems have their own header files.

  • parse.y — This file describes the LALR(1) grammar that SQLite uses
    to parse SQL statements, and the actions that are taken at each step
    in the parsing process.

  • vdbe.c — This file implements the virtual machine that runs
    prepared statements. There are various helper files whose names
    begin with «vdbe». The VDBE has access to the vdbeInt.h header file
    which defines internal data objects. The rest of SQLite interacts
    with the VDBE through an interface defined by vdbe.h.

  • where.c — This file (together with its helper files named
    by «where*.c») analyzes the WHERE clause and generates
    virtual machine code to run queries efficiently. This file is
    sometimes called the «query optimizer». It has its own private
    header file, whereInt.h, that defines data objects used internally.

  • btree.c — This file contains the implementation of the B-Tree
    storage engine used by SQLite. The interface to the rest of the system
    is defined by «btree.h». The «btreeInt.h» header defines objects
    used internally by btree.c and not published to the rest of the system.

  • pager.c — This file contains the «pager» implementation, the
    module that implements transactions. The «pager.h» header file
    defines the interface between pager.c and the rest of the system.

  • os_unix.c and os_win.c — These two files implement the interface
    between SQLite and the underlying operating system using the run-time
    pluggable VFS interface.

  • shell.c.in — This file is not part of the core SQLite library. This
    is the file that, when linked against sqlite3.a, generates the
    «sqlite3.exe» command-line shell. The «shell.c.in» file is transformed
    into «shell.c» as part of the build process.

  • tclsqlite.c — This file implements the Tcl bindings for SQLite. It
    is not part of the core SQLite library. But as most of the tests in this
    repository are written in Tcl, the Tcl language bindings are important.

  • test*.c — Files in the src/ folder that begin with «test» go into
    building the «testfixture.exe» program. The testfixture.exe program is
    an enhanced Tcl shell. The testfixture.exe program runs scripts in the
    test/ folder to validate the core SQLite code. The testfixture program
    (and some other test programs too) is built and run when you type
    «make test».

  • ext/misc/json1.c — This file implements the various JSON functions
    that are built into SQLite.

There are many other source files. Each has a succinct header comment that
describes its purpose and role within the larger system.
