PostgreSQL | MySQL version control engine. Applies effects deliberately.
Migrate your PostgreSQL or MySQL database from Erlang code with no effort. This amazing toolkit has one and only one purpose: consistently upgrading a database schema, using the Erlang stack and plain SQL. Feel free to run it with any PostgreSQL/MySQL Erlang library (see several ready-to-use examples below). As an extra, you can do all of this in "no side-effects" mode.
Just call `pure_migrations:migrate/3` (see the specification here), providing:

* `Path` to the migration scripts folder (strictly and incrementally enumerated)
* `FTx` transaction handler
* `FQuery` database queries execution handler

Migration logic is idempotent and can be executed multiple times against the same database with the same set of migration scripts. Moreover, it is safe to migrate your database concurrently (for example as part of node startup in scalable environments), provided you supply a proper transaction handler. Please see the verified integrations and live code snippets below.
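Schematically, a migration run boils down to building the call and then invoking it. A minimal sketch (the handler bodies below are illustrative stubs; real handlers wrap a concrete client library and map its result sets, as the snippets further down show):

```erlang
Path   = "scripts/folder/path",
FTx    = fun(F) -> F() end,          %% stub: run F inside a real transaction here
FQuery = fun(Q) -> run_query(Q) end, %% stub: run_query/1 is a hypothetical helper
MigrationCall = pure_migrations:migrate(Path, FTx, FQuery),
ok = MigrationCall().
```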
All integrations validated against PostgreSQL 9.4/9.6
Database dialect | Library | Example |
---|---|---|
postgres | epgsql/epgsql:4.2.1 | epgsql test |
postgres | semiocast/pgsql:v26.0.2 | spgsql test |
postgres | processone/p1_pgsql:1.1.6 | p1pgsql test |
mysql | mysql-otp/mysql-otp:1.4.0 | otp_mysql test |
postgres | any library with basic SQL functionality | generic test |
Sure! Please follow these simple steps below:

`make local`

Library production code has no third-party dependencies at all. The code stays extremely lightweight and decoupled from any particular library's bottlenecks. Users are absolutely free to choose any implementation (maybe one of the validated ones above) and its version as well.
The approach could be expressed as two rules: every migration script carries a unique version number, and the numbering is strictly incremental.
This model gives much more clarity to the migrations sequence, and there is no chance of interlacing migrations accidentally, which can happen when multiple migrations are developed and merged to the default branch simultaneously.
```erlang
Conn = ?config(conn, Opts),
MigrationCall =
pure_migrations:migrate(
"scripts/folder/path",
fun(F) -> epgsql:with_transaction(Conn, fun(_) -> F() end) end,
fun(Q) ->
case epgsql:squery(Conn, Q) of
{ok, [
{column, <<"version">>, _, _, _, _, _},
{column, <<"filename">>, _, _, _, _, _}], Data} ->
[{list_to_integer(binary_to_list(BinV)), binary_to_list(BinF)} || {BinV, BinF} <- Data];
{ok, [{column, <<"max">>, _, _, _, _, _}], [{null}]} -> -1;
{ok, [{column, <<"max">>, _, _, _, _, _}], [{N}]} ->
list_to_integer(binary_to_list(N));
[{ok, _, _}, {ok, _}] -> ok;
{ok, _, _} -> ok;
{ok, _} -> ok;
Default -> Default
end
end),
...
%% more preparation steps if needed
...
%% migration call
ok = MigrationCall(),
```
Also see examples from live epgsql integration tests here
```erlang
Conn = ?config(conn, Opts),
MigrationCall =
pure_migrations:migrate(
"scripts/folder/path",
fun(F) ->
pgsql_connection:simple_query("BEGIN", Conn),
try F() of
Res ->
pgsql_connection:simple_query("COMMIT", Conn),
Res
catch
_:Problem ->
pgsql_connection:simple_query("ROLLBACK", Conn),
{rollback, Problem}
end
end,
fun(Q) ->
case pgsql_connection:simple_query(Q, Conn) of
{{select, 0}, []} -> [];
{{select, 1}, Data = [{_V, _F}|_]} ->
[{V, binary_to_list(BinF)} || {V, BinF} <- Data];
{{select, 1}, [{null}]} -> -1;
{{select, 1}, [{N}]} -> N;
{{insert, 0, 1}, []} -> ok;
{{create, table},[]} -> ok;
{error, Details} -> {error, Details};
_ -> ok
end
end),
...
%% more preparation steps if needed
...
%% migration call
ok = MigrationCall(),
```
Also see examples from live semiocast/pgsql integration tests here
```erlang
Conn = ?config(conn, Opts),
MigrationCall =
pure_migrations:migrate(
"scripts/folder/path",
fun(F) ->
pgsql:squery(Conn, "BEGIN"),
try F() of
Res ->
pgsql:squery(Conn, "COMMIT"),
Res
catch
_:Problem ->
pgsql:squery(Conn, "ROLLBACK"),
{rollback, Problem}
end
end,
fun(Q) ->
case pgsql:squery(Conn, Q) of
{ok, [{error, Details}]} -> {error, Details};
{ok, [{_, [
{"version", text, _, _, _, _, _},
{"filename", text, _, _, _, _, _}], Data}]} ->
[{list_to_integer(V), F} || [V, F] <- Data];
{ok, [{"SELECT 1", [{"max", text, _, _, _, _, _}], [[null]]}]} -> -1;
{ok, [{"SELECT 1", [{"max", text, _, _, _, _, _}], [[N]]}]} ->
list_to_integer(N);
{ok, _} -> ok
end
end),
...
%% more preparation steps if needed
...
%% migration call
ok = MigrationCall(),
```
Also see examples from live p1_pgsql integration tests here
```erlang
Conn = ?config(conn, Opts),
MigrationCall =
pure_migrations:migrate(
"scripts/folder/path",
fun(F) ->
%% no full-scope tx API available here
%% alternatively use mysql:transaction/2, but please be aware about
%% mysql implicit transactions commit behavior
try F() of
Res -> Res
catch
_:Problem -> {rollback_unavailable, Problem}
end
end,
fun(Q) ->
case mysql:query(Conn, Q) of
{error, Details} -> {error, Details};
{ok,[<<"version">>,<<"filename">>],[]} -> [];
{ok,[<<"version">>,<<"filename">>], Data} ->
[{V, binary_to_list(F)} || [V, F] <- Data];
{ok,[<<"max(version)">>],[[null]]} -> -1;
{ok,[<<"max(version)">>],[[V]]} -> V;
{ok, _} -> ok;
ok -> ok
end
end),
...
%% more preparation steps if needed
...
%% migration call
ok = MigrationCall(),
```
Also see examples from live mysql-otp integration tests here
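If transactional wrapping is still desired despite the caveat in the snippet above, an FTx handler based on mysql:transaction/2 might look roughly like this (a sketch only; keep in mind that MySQL issues implicit commits on DDL statements, so rollbacks will not cover schema changes):

```erlang
%% Sketch of an alternative FTx handler using mysql-otp's mnesia-style API.
FTx = fun(F) ->
          case mysql:transaction(Conn, fun() -> F() end) of
              {atomic, Res} -> Res;
              {aborted, Problem} -> {rollback, Problem}
          end
      end,
```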
Oh, there is more! The library is implemented in such a way that all side effects are either externalized or deferred explicitly, for reasons quite common to functional designs.

There are two externalized kinds of effects, matching the handlers passed to pure_migrations:migrate/3:

* transaction management (the FTx handler)
* database query execution (the FQuery handler)

Other effects (like file operations) are deferred in bulk: building a migration yields a single zero-arity fun (as in the snippets above), and nothing runs until that fun is invoked.
This trick is quite useful when someone would like to compose two functions without actually executing them (or without applying them, alternatively speaking). This pretty standard routine may look like the snippet below (Scala, or Kotlin + Arrow):
```scala
val divideByTwo = (number: Int) => number / 2
val addThree = (number: Int) => number + 3
val composed = addThree compose divideByTwo
```
Simplistic Erlang version:
```erlang
compose(F1, F2) -> fun() -> F2(F1()) end.
```
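A quick usage sketch (values are illustrative): composing only builds another deferred computation, and neither function runs until the resulting fun is invoked.

```erlang
ReadVersion = fun() -> 3 end,                          %% stand-in for an effectful read
NextVersion = compose(ReadVersion, fun(V) -> V + 1 end),
4 = NextVersion().                                     %% both funs execute only here
```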
There are a few places in the library with a clear need to compose a function A with another function B inside a deferred execution context. The specifics are that A supplies a list of objects, and B should be applied to each of them. This sounds like a functor: B is applied to A's output while that output is wrapped into a future execution context. Two cases of this appeared in the library:
```erlang
%% map/2 call here produces a new context (a deferred function call)
map(Generate, Map) -> fun() -> [Map(R) || R <- Generate()] end.
flatten(Generate) -> fun() -> [ok = R() || R <- Generate()], ok end.
```
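Both combinators keep their results wrapped in zero-arity funs, so a pipeline can be assembled first and executed later. A small illustration with made-up generators:

```erlang
Numbers = fun() -> [1, 2, 3] end,
Doubled = map(Numbers, fun(N) -> N * 2 end),
[2, 4, 6] = Doubled(),                                 %% the list materializes only now
Actions = fun() -> [fun() -> ok end, fun() -> ok end] end,
ok = (flatten(Actions))().                             %% every deferred action must return ok
```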
Partial application is very useful when not all function arguments are known yet, or when there is a deliberate decision to pass some of the arguments later on. Again, in Scala it may look like:
```scala
val add = (a: Int, b: Int) => a + b
val partiallyApplied = add(3, _: Int)
```
The library code uses a very simplistic partial application, done for an exact number of arguments (although it is easy to generalize it to arguments represented as a list):
```erlang
Partial = fun(V_F) -> do_migration(Path, FQuery, V_F) end,
```
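That generalization might look like the sketch below (not part of the library; the partial/2 helper is purely illustrative):

```erlang
%% Hypothetical helper: pre-binds the Known arguments now, applies F once the
%% remaining arguments arrive as a list.
partial(F, Known) -> fun(Rest) -> erlang:apply(F, Known ++ Rest) end.

%% Usage:
Add3 = partial(fun erlang:'+'/2, [3]),
7 = Add3([4]).
```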
MIT, see LICENSE for more details.