[{"updates":[],"title":"Equivalence Between Integer Comparison and Disjointness Testing of \"Prefix\" Sets","slug":"comp","summary":"comp","image":"","tags":["comp"],"content":"

The idea in this post is due to Dov Gordon, who shared it with me in late 2016 at George Mason University in connection with some crypto work we were doing. Dov has allowed me to share it here. It might be something that is already out there; I'm not sure, but since I found it particularly interesting and quite relevant to the crypto work we were\/are doing, I will describe it here. First I will give some formal definitions, then prove the theorem which is the crux of this post, and finally I will include some Python code that illustrates the idea.<\/p>\n

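To fix intuition before the formal definitions, here is a small self-contained Python sketch of the idea: an integer comparison is decided purely by a set-disjointness test on "prefix" sets. The helper names (`prefixes`, `inc`, `lte`) are illustrative and chosen for this sketch; the post's own code appears at the end.

```python
# Illustrative sketch: compare nonnegative integers by testing
# disjointness of prefix sets of their binary representations.

def prefixes(x, nbits):
    """All non-empty prefixes of the nbits-bit binary representation of x."""
    s = format(x, '0{}b'.format(nbits))
    return {s[:i + 1] for i in range(nbits)}

def inc(prefix):
    """Binary string for int(prefix) + 1, growing by one bit on overflow."""
    v = int(prefix, 2) + 1
    width = len(prefix) + (1 if v >= 2 ** len(prefix) else 0)
    return format(v, '0{}b'.format(width))

def lte(x, y, nbits):
    """x <= y iff prefixes(x) is disjoint from the incremented prefixes of y."""
    return not (prefixes(x, nbits) & {inc(p) for p in prefixes(y, nbits)})

# With n = 3 data bits we use nbits = n + 1 = 4 (a leading zero bit):
assert lte(3, 5, 4)       # 3 <= 5: the sets are disjoint
assert not lte(5, 3, 4)   # 5 > 3: the sets intersect
assert lte(5, 5, 4)       # equality also satisfies the relation
```

For example, `prefixes(3, 4)` is `{'0', '00', '001', '0011'}` and the incremented prefixes of `5` (binary `0101`) are `{'1', '10', '011', '0110'}`; the two sets are disjoint, witnessing `3 <= 5`.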
\n1. <\/b>The function $\\mathsf{bin}_m : \\mathbb{N}_0 \\to \\{0, 1\\}^{m}$<\/code> indexed by an integer $m$<\/code> maps a nonnegative integer to a binary string of length $m$<\/code> corresponding to its $m$<\/code>-bit binary representation and is defined as $\\mathsf{bin}_m(x) = b_{m - 1}b_{m - 2}\\cdots b_0$<\/code> where $b_i := \\lfloor x \/ 2^i \\rfloor \\mod{2}$<\/code>.\n<\/div>\n\n2. <\/b>The function $\\mathsf{int} : \\{0, 1\\}^+ \\to \\mathbb{N}_0$<\/code> maps a non-empty binary string to a nonnegative integer corresponding to the numerical value represented by the binary string and is defined as $\\mathsf{int}(b_{k - 1} b_{k - 2} \\cdots b_0) \\mapsto \\sum_{0 \\leq i < k} b_i \\cdot 2^{i}$<\/code>.\n<\/div>\n\n3. <\/b>The prefix set<\/em> associated with a nonnegative integer is the set containing all non-empty prefixes of its corresponding binary string. For a nonnegative $n$<\/code>-bit integer $x \\in \\mathbb{N}_0$<\/code>, the prefix set $P_x$<\/code> is defined as $$P_x := \\{b_{n}\\cdots b_{n - i} \\in \\{0, 1\\}^{+} : 0 \\leq i \\leq n\\}$$<\/code> where $b_nb_{n - 1}\\cdots b_0 = \\mathsf{bin}_{n + 1}(x)$<\/code>.\nThe set of prefix sets is denoted by $\\mathbb{P} := \\{P_x : x \\in \\mathbb{N}_0, x < 2^n\\}$<\/code> and the function $\\mathsf{pre} : \\mathbb{N}_0 \\to \\mathbb{P}$<\/code> maps nonnegative integers to their associated prefix set; that is, $\\mathsf{pre}(x) = P_x$<\/code>, where $P_x$<\/code> is defined as above for nonnegative integer $x$<\/code>.\n<\/div>\n\n4. <\/b>The function $\\mathsf{inc} : \\{0, 1\\}^+ \\to \\{0, 1\\}^+$<\/code> maps a binary string representing integer $x$<\/code> to the binary string representing integer $x + 1$<\/code> and is defined as\n$\\mathsf{inc}(s) = \\mathsf{bin}_{m}(\\mathsf{int}(s) + 1)$<\/code> where $m = |s|$<\/code> if $\\mathsf{int}(s) + 1 < 2^{|s|}$<\/code> and $m = |s| + 1$<\/code> otherwise (so a carry out of the top bit lengthens the string by one), and where the notation $|\\cdot|$<\/code> denotes the length of a binary string. 
By abuse of notation, we write $\\mathsf{inc}(P_x) = \\{\\mathsf{inc}(s) : s \\in P_x\\}$<\/code> to apply $\\mathsf{inc}$<\/code> to every prefix in a prefix set.\n<\/div>\n\n5. <\/b>The binary relation $\\preceq$<\/code> is defined on the set of prefix sets $\\mathbb{P}$<\/code> as follows:\n$$P_x \\preceq P_y := P_x \\cap \\mathsf{inc}(P_y) = \\emptyset$$<\/code>.\n<\/div>\n\n1. <\/b>The map $\\mathsf{pre}$<\/code> is a relation-preserving isomorphism between $(\\{x \\in \\mathbb{N}_0 : x < 2^n\\}, \\leq)$<\/code> and $(\\mathbb{P}, \\preceq)$<\/code>.<\/i>\n<\/div>\n\nWe first prove that for nonnegative integers $x, y < 2^n$<\/code>: $x \\leq y \\implies P_x \\preceq P_y$<\/code>.\nFor the case of $x = y$<\/code>, both prefix sets are equal, so adding one to the prefixes in one of the sets results in a disjoint set of prefixes. Now for the case $x < y$<\/code>. In this case, there is a position $i$<\/code>, the first from left to right at which $x$<\/code> and $y$<\/code> differ, and the bit there is one for $y$<\/code> and zero for $x$<\/code>. The set of prefixes with fewer than $i$<\/code> bits is equal for $P_x$<\/code> and $P_y$<\/code>, so adding one to the prefixes in this subset of $P_y$<\/code> will result in a subset disjoint from that of $P_x$<\/code>. Adding one to the set of prefixes in $P_y$<\/code> with $i$<\/code> or more bits results in a set disjoint from the set of prefixes in $P_x$<\/code> with $i$<\/code> or more bits, since for each such length the prefix of $y$<\/code> is strictly greater than the prefix of $x$<\/code> of the same length, and adding one preserves this strict inequality. It remains to prove the other direction; that is, $P_x \\preceq P_y \\implies x \\leq y$<\/code>. If the prefix sets after adding one to the prefixes in $P_y$<\/code> are disjoint, there are two possibilities. First, the original sets $P_x$<\/code> and $P_y$<\/code> are equal, therefore $x = y$<\/code>. 
Secondly, at the first position $i \\geq 1$<\/code>, from left to right, at which $x$<\/code> and $y$<\/code> differ, the bit must be one for $y$<\/code> and zero for $x$<\/code>: otherwise, adding one to the prefix of $y$<\/code> with $i$<\/code> bits, whose last bit is zero, would result in a prefix equal to one in $P_x$<\/code>, and therefore we would not have $P_x \\preceq P_y$<\/code> (the sets would not be disjoint). Therefore, it follows that $x < y$<\/code> and we can conclude the proof.<\/p>\n<\/div>\nHere is Python code to generate the prefix sets and check the $\\preceq$<\/code> relation between them.<\/p>\n<pre><code>def to_bin(v, nbits):\n    bin_rep = [0] * nbits\n    i = nbits - 1\n    while v != 0 and i >= 0:\n        bin_rep[i] = v % 2\n        v \/\/= 2\n        i -= 1\n    return bin_rep\n\ndef from_bin_str(bstr):\n    bits = list(map(int, bstr))\n    i = len(bits) - 1\n    v = 0\n    power = 1\n    while i >= 0:\n        v += bits[i] * power\n        power *= 2\n        i -= 1\n    return v\n\ndef add_one_to_prefix(prefix):\n    nbits = len(prefix)\n    v = from_bin_str(prefix) + 1\n    if v > 2**nbits - 1:\n        nbits += 1\n    return ''.join(map(str, to_bin(v, nbits)))\n\ndef gen_prefixes(v, nbits):\n    bin_rep_s = list(map(str, to_bin(v, nbits)))\n    prefixes = set()\n    for i in range(nbits):\n        prefix = ''.join(bin_rep_s[:i + 1])\n        prefixes.add(prefix)\n    return prefixes\n\ndef lte_prefix(a, b, nbits):\n    a_prefixes = gen_prefixes(a, nbits)\n    b_prefixes = gen_prefixes(b, nbits)\n    new_b_prefixes = set()\n    for prefix in b_prefixes:\n        new_b_prefixes.add(add_one_to_prefix(prefix))\n\n    return len(a_prefixes & new_b_prefixes) == 0<\/code><\/pre>\nTo support $n$<\/code>-bit integers, set the variable $\\mathsf{nbits}$<\/code> to $n + 1$<\/code> and call the function $\\mathsf{lte\\_prefix}$<\/code>, which evaluates the comparison between two integers by means of the $\\preceq$<\/code> relation.<\/p>","raw":"The idea in this post is due to Dov Gordon, who shared it with me in late 2016 at George Mason University in connection with some crypto work we were doing. 
Dov has allowed me to share it here. It might be something that is already out there; I'm not sure, but since I found it particularly interesting and quite relevant to the crypto work we were\/are doing, I will describe it here. First I will give some formal definitions, then prove the theorem which is the crux of this post, and finally I will include some Python code that illustrates the idea.\n\n!definition{def:bin}{The function $\\mathsf{bin}_m : \\mathbb{N}_0 \\to \\{0, 1\\}^{m}$ indexed by an integer $m$ maps a nonnegative integer to a binary string of length $m$ corresponding to its $m$-bit binary representation and is defined as $\\mathsf{bin}_m(x) = b_{m - 1}b_{m - 2}\\cdots b_0$ where $b_i := \\lfloor x \/ 2^i \\rfloor \\mod{2}$.\n}\n\n!definition{def:int}{The function $\\mathsf{int} : \\{0, 1\\}^+ \\to \\mathbb{N}_0$ maps a non-empty binary string to a nonnegative integer corresponding to the numerical value represented by the binary string and is defined as $\\mathsf{int}(b_{k - 1} b_{k - 2} \\cdots b_0) \\mapsto \\sum_{0 \\leq i < k} b_i \\cdot 2^{i}$.\n}\n\n!definition{def:prefixset}{The *prefix set* associated with a nonnegative integer is the set containing all non-empty prefixes of its corresponding binary string. 
For a nonnegative $n$-bit integer $x \\in \\mathbb{N}_0$, the prefix set $P_x$ is defined as $$P_x := \\{b_{n}\\cdots b_{n - i} \\in \\{0, 1\\}^{+} : 0 \\leq i \\leq n\\}$$ where $b_nb_{n - 1}\\cdots b_0 = \\mathsf{bin}_{n + 1}(x)$.\nThe set of prefix sets is denoted by $\\mathbb{P} := \\{P_x : x \\in \\mathbb{N}_0, x < 2^n\\}$ and the function $\\mathsf{pre} : \\mathbb{N}_0 \\to \\mathbb{P}$ maps nonnegative integers to their associated prefix set; that is, $\\mathsf{pre}(x) = P_x$, where $P_x$ is defined as above for nonnegative integer $x$.\n}\n\n!definition{def:inc}{The function $\\mathsf{inc} : \\{0, 1\\}^+ \\to \\{0, 1\\}^+$ maps a binary string representing integer $x$ to the binary string representing integer $x + 1$ and is defined as\n$\\mathsf{inc}(s) = \\mathsf{bin}_{m}(\\mathsf{int}(s) + 1)$ where $m = |s|$ if $\\mathsf{int}(s) + 1 < 2^{|s|}$ and $m = |s| + 1$ otherwise (so a carry out of the top bit lengthens the string by one), and where the notation $|\\cdot|$ denotes the length of a binary string. By abuse of notation, we write $\\mathsf{inc}(P_x) = \\{\\mathsf{inc}(s) : s \\in P_x\\}$ to apply $\\mathsf{inc}$ to every prefix in a prefix set.\n}\n\n!definition{def:relation}{The binary relation $\\preceq$ is defined on the set of prefix sets $\\mathbb{P}$ as follows:\n$$P_x \\preceq P_y := P_x \\cap \\mathsf{inc}(P_y) = \\emptyset$$.\n}\n\n!theorem{th:isom}{The map $\\mathsf{pre}$ is a relation-preserving isomorphism between $(\\{x \\in \\mathbb{N}_0 : x < 2^n\\}, \\leq)$ and $(\\mathbb{P}, \\preceq)$.}\n!proof{\nWe first prove that for nonnegative integers $x, y < 2^n$: $x \\leq y \\implies P_x \\preceq P_y$. \nFor the case of $x = y$, both prefix sets are equal, so adding one to the prefixes in one of the sets results in a disjoint set of prefixes. Now for the case $x < y$. In this case, there is a position $i$, the first from left to right at which $x$ and $y$ differ, and the bit there is one for $y$ and zero for $x$. 
The set of prefixes with fewer than $i$ bits is equal for $P_x$ and $P_y$, so adding one to the prefixes in this subset of $P_y$ will result in a subset disjoint from that of $P_x$. Adding one to the set of prefixes in $P_y$ with $i$ or more bits results in a set disjoint from the set of prefixes in $P_x$ with $i$ or more bits, since for each such length the prefix of $y$ is strictly greater than the prefix of $x$ of the same length, and adding one preserves this strict inequality. It remains to prove the other direction; that is, $P_x \\preceq P_y \\implies x \\leq y$. If the prefix sets after adding one to the prefixes in $P_y$ are disjoint, there are two possibilities. First, the original sets $P_x$ and $P_y$ are equal, therefore $x = y$. Secondly, at the first position $i \\geq 1$, from left to right, at which $x$ and $y$ differ, the bit must be one for $y$ and zero for $x$: otherwise, adding one to the prefix of $y$ with $i$ bits, whose last bit is zero, would result in a prefix equal to one in $P_x$, and therefore we would not have $P_x \\preceq P_y$ (the sets would not be disjoint). Therefore, it follows that $x < y$ and we can conclude the proof.\n}\nHere is Python code to generate the prefix sets and check the $\\preceq$ relation between them.\n\n```python\ndef to_bin(v, nbits):\n    bin_rep = [0] * nbits\n    i = nbits - 1\n    while v != 0 and i >= 0:\n        bin_rep[i] = v % 2\n        v \/\/= 2\n        i -= 1\n    return bin_rep\n\ndef from_bin_str(bstr):\n    bits = list(map(int, bstr))\n    i = len(bits) - 1\n    v = 0\n    power = 1\n    while i >= 0:\n        v += bits[i] * power\n        power *= 2\n        i -= 1\n    return v\n\ndef add_one_to_prefix(prefix):\n    nbits = len(prefix)\n    v = from_bin_str(prefix) + 1\n    if v > 2**nbits - 1:\n        nbits += 1\n    return ''.join(map(str, to_bin(v, nbits)))\n\ndef gen_prefixes(v, nbits):\n    bin_rep_s = list(map(str, to_bin(v, nbits)))\n    prefixes = set()\n    for i in range(nbits):\n        prefix = ''.join(bin_rep_s[:i + 1])\n        prefixes.add(prefix)\n    return prefixes\n\ndef lte_prefix(a, b, nbits):\n    a_prefixes = gen_prefixes(a, nbits)\n    b_prefixes = gen_prefixes(b, nbits)\n    new_b_prefixes = set()\n    for prefix in b_prefixes:\n        new_b_prefixes.add(add_one_to_prefix(prefix))\n\n    return len(a_prefixes & new_b_prefixes) == 0\n```\n\nTo support $n$-bit integers, set the variable $\\mathsf{nbits}$ to $n + 1$ and call the function $\\mathsf{lte\\_prefix}$, which evaluates the comparison between two integers by means of the $\\preceq$ relation.\n","timestamp":1594891599,"edited":1594891599},{"updates":[],"title":"Open Problem (Theoretical Cryptography): Simultaneously Achieving Both Anonymity and Homomorphism in Additively-Homomorphic IBE","slug":"anonprob","summary":"anonprob","image":"","tags":["anonprob"],"content":"Before I delve into this open problem, I want to mention that I'm happy I have the new features that I've recently added to this blog (see my previous two posts) available to me, such as LaTeX-style theorem environments and bibliography\/references, because I think I'm going to need them here, although I'm just going to start writing and see what happens.<\/p>\nNow it is worth pointing out that the open problem here is most likely not of much general interest, even in cryptography, since it is very narrowly-focused and theoretical. It relates to a primitive known as identity-based group homomorphic encryption (IBGHE), which is defined in my PKC paper  along with some of my other papers such as  and . Basically, IBGHE is identity-based encryption that is homomorphic for some group operation and the ciphertext space for every identity forms a group. Moreover, the decryption function is a group homomorphism between the ciphertext group and the plaintext group. The aforementioned papers describe some of the applications of group homomorphic encryption (GHE).<\/p>\nIt is an open problem to construct an IBGHE that is simultaneously anonymous and homomorphic for addition. 
There are only two IBGHE schemes that support modular addition to the best of my knowledge, namely my XOR-homomorphic variant of the Cocks IBE scheme in  and my IBGHE scheme from  that is homomorphic for addition modulo smooth square-free integers. Now Joye discovered that the Cocks IBE scheme itself is XOR-homomorphic  but the scheme is not an IBGHE since the ciphertext space with the homomorphic operation forms a quasigroup and not a group. Some readers might wonder about schemes that are considered multiplicatively homomorphic, which allow addition in the exponent, and question why we do not classify them as IBGHE schemes for addition. The reason is that the corresponding additive group has exponential order and decryption, using Pollard's lambda algorithm, can only recover messages that are less than some polynomial bound, so the valid<\/em> message space does not form an additive group. Now the two IBGHE schemes supporting modular addition that we are aware of are not anonymous (see my post<\/a> on breaking anonymity in the scheme from ). But there are variants of these schemes that achieve anonymity. However, although these schemes gain anonymity, they lose the homomorphic property. Usually, we need to know the identity associated with a ciphertext in order to correctly compute the homomorphic operation, and so when the identity is hidden from us, as it is when the scheme is anonymous, we cannot compute the homomorphic operation. So the open problem here in a nutshell is to construct an IBGHE for addition that is anonymous while retaining the homomorphic operation. This is mainly of theoretical interest.<\/p>\nI would be satisfied with any progress in this direction. Even when indistinguishability obfuscation (iO) is employed, we still do not know how to solve this problem. In fact, it is proved in my paper  that assuming iO, given public-key GHE for some group, we get IBGHE for the same group. However, the transformation does not give us anonymity. 
I have an outline of a paradigm for solving this problem that makes use of the result in  from iO. The paradigm I have in mind is as follows. As part of the public parameters, we have an obfuscated program that maps an identity to a public key in some multi-user<\/em> system with public parameters. The public keys in a multi-user<\/em> system share the same set of common public parameters - think of the generator $g$<\/code> and modulus $p$<\/code> in ElGamal as the common public parameters, except ElGamal is of no use here since it is only multiplicatively homomorphic. Nevertheless, ElGamal serves to illustrate another property that this paradigm requires, namely that the multi-user system supports key privacy where key privacy can be viewed as the analog to anonymity in the identity-based setting; that is, the ciphertexts in the multi-user system do not reveal the public key they are associated with, which is the case in ElGamal. I'm using the term multi-user system in a broad sense here, permitting both the case where we have a trusted authority and the case where we do not. In the former, the public parameters are generated by a trusted authority with a backdoor (master secret key) such that the trusted authority can decrypt any ciphertext. In our paradigm, the public parameters of the multi-user system will be generated by the trusted authority of the IBE scheme and published as part of the IBE scheme's public parameters. So we need the multi-user system to be key-private and additively homomorphic, where the homomorphic operation can be computed without knowing the public key associated with a ciphertext. Then additionally assuming iO, we get a solution to the problem, albeit with strong assumptions.<\/p>\nReferences<\/h2>\n\n\nAdditively Homomorphic IBE from Higher Residuosity<\/i>. Michael Clear and Ciaran McGoldrick. Public Key Cryptography. 2019. \n<\/li>\n\nA Note on Attribute-Based Group Homomorphic Encryption<\/i>. Michael Clear and Ciaran McGoldrick. 
Cryptology ePrint Archive, Report 2017\/752. 2017. \n<\/li>\n\nHomomorphic Encryption with Access Policies: Characterization and New Constructions<\/i>. Michael Clear, Arthur Hughes and Hitesh Tewari. AFRICACRYPT. 2013. \n<\/li>\n\nIdentity-Based Cryptosystems and Quadratic Residuosity<\/i>. Marc Joye. Public Key Cryptography (1). 2016. \n<\/li>\n<\/ol>\n","raw":"Before I delve into this open problem, I want to mention that I'm happy I have the new features that I've recently added to this blog (see my previous two posts) available to me, such as LaTeX-style theorem environments and bibliography\/references, because I think I'm going to need them here, although I'm just going to start writing and see what happens.\n\nNow it is worth pointing out that the open problem here is most likely not of much general interest, even in cryptography, since it is very narrowly-focused and theoretical. It relates to a primitive known as identity-based group homomorphic encryption (IBGHE), which is defined in my PKC paper !cite{conf\/pkc\/ClearM19} along with some of my other papers such as !cite{ClearM17} and !cite{conf\/africacrypt\/ClearHT13}. Basically, IBGHE is identity-based encryption that is homomorphic for some group operation and the ciphertext space for every identity forms a group. Moreover, the decryption function is a group homomorphism between the ciphertext group and the plaintext group. The aforementioned papers describe some of the applications of group homomorphic encryption (GHE).\n\nIt is an open problem to construct an IBGHE that is simultaneously anonymous and homomorphic for addition. There are only two IBGHE schemes that support modular addition to the best of my knowledge, namely my XOR-homomorphic variant of the Cocks IBE scheme in !cite{conf\/africacrypt\/ClearHT13} and my IBGHE scheme from !cite{conf\/pkc\/ClearM19} that is homomorphic for addition modulo smooth square-free integers. 
Now Joye discovered that the Cocks IBE scheme itself is XOR-homomorphic !cite{conf\/pkc\/Joye16} but the scheme is not an IBGHE since the ciphertext space with the homomorphic operation forms a quasigroup and not a group. Some readers might wonder about schemes that are considered multiplicatively homomorphic, which allow addition in the exponent, and question why we do not classify them as IBGHE schemes for addition. The reason is that the corresponding additive group has exponential order and decryption, using Pollard's lambda algorithm, can only recover messages that are less than some polynomial bound, so the *valid* message space does not form an additive group. Now the two IBGHE schemes supporting modular addition that we are aware of are not anonymous (see [my post](http:\/\/ciphron.netsoc.ie\/blog?post=anon) on breaking anonymity in the scheme from !cite{conf\/pkc\/ClearM19}). But there are variants of these schemes that achieve anonymity. However, although these schemes gain anonymity, they lose the homomorphic property. Usually, we need to know the identity associated with a ciphertext in order to correctly compute the homomorphic operation, and so when the identity is hidden from us, as it is when the scheme is anonymous, we cannot compute the homomorphic operation. So the open problem here in a nutshell is to construct an IBGHE for addition that is anonymous while retaining the homomorphic operation. This is mainly of theoretical interest.\n\nI would be satisfied with any progress in this direction. Even when indistinguishability obfuscation (iO) is employed, we still do not know how to solve this problem. In fact, it is proved in my paper !cite{ClearM17} that assuming iO, given public-key GHE for some group, we get IBGHE for the same group. However, the transformation does not give us anonymity. I have an outline of a paradigm for solving this problem that makes use of the result in !cite{ClearM17} from iO. The paradigm I have in mind is as follows. 
As part of the public parameters, we have an obfuscated program that maps an identity to a public key in some *multi-user* system with public parameters. The public keys in a *multi-user* system share the same set of common public parameters - think of the generator $g$ and modulus $p$ in ElGamal as the common public parameters, except ElGamal is of no use here since it is only multiplicatively homomorphic. Nevertheless, ElGamal serves to illustrate another property that this paradigm requires, namely that the multi-user system supports key privacy where key privacy can be viewed as the analog to anonymity in the identity-based setting; that is, the ciphertexts in the multi-user system do not reveal the public key they are associated with, which is the case in ElGamal. I'm using the term multi-user system in a broad sense here permitting both the case where we have a trusted authority and the case where we do not. In the former, the public parameters are generated by a trusted authority with a backdoor (master secret key) such that the trusted authority can decrypt any ciphertext. In our paradigm, the public parameters of the multi-user system will be generated by the trusted authority of the IBE scheme and published as part of the IBE scheme's public parameters. So we need the multi-user system to be key-private and additively homomorphic, where the homomorphic operation can be computed without knowing the public key associated with a ciphertext. 
Then additionally assuming iO, we get a solution to the problem, albeit with strong assumptions.","timestamp":1594324510,"edited":1594324510},{"updates":[{"title":"Code available on GitHub","slug":"uberlight#2","summary":"github","image":"","tags":["uberlight"],"content":"The code for uberlight is now available here<\/a>.<\/p>","raw":"The code for uberlight is now available [here](https:\/\/github.com\/ciphron\/hyperlight).\n","timestamp":1589529783,"edited":1589529783},{"title":"Example","slug":"uberlight#1","summary":"uberlight example","image":"","tags":["uberlight"],"content":"An example update.<\/p>","raw":"An example update.\n","timestamp":1586676803,"edited":1586676803}],"title":"Uberlight: Minimalistic Flat-file Blog Engine Written in PHP","slug":"uberlight","summary":"uberlight","image":"","tags":["uberlight"],"content":"Uberlight is a fork of a minimalistic flat-file blog engine written in PHP called hyperlight, which you can find on GitHub<\/a>. My desire for my own blog was to have an even more minimalistic engine, so I forked hyperlight and stripped down some of the features, changed others and added more to suit my needs. Although the theme can be easily changed, my particular taste favors a simple style as you can see here. But even after stripping down hyperlight to a certain extent, there were features I wanted to add. So in this post, I describe some of the new features and how to use them. Firstly though, I will briefly describe how posts are written in uberlight, preserved exactly from hyperlight.<\/p>\nUberlight has a config file which specifies a 'posts' directory. To write a post, a Markdown file is added to the posts directory with the extension '.md'. The prefix (i.e. the string preceding the extension and referred to in hyperlight as 'slug') is the unique label that is used to locate the post. The file starts with the title of the post. The next line is a description of the post. 
Another line below describes a series of optional tags, separated by commas, which are only used internally in uberlight and whose purpose differs significantly from that in hyperlight. The body of the post file is written in Markdown and provides the content of the post. One feature I have added is a way to obtain the raw content of the post in Markdown, so a link is given at the end of every post which does exactly that. You can find it at the end of this post and inspect the raw Markdown form if you so wish.<\/p>\nNow we turn to the main new features I have added in uberlight, which accomplish things that I would personally like to do in my own posts.<\/p>\nMath Expressions in TeX<\/h1>\nWe would like to write math expressions in TeX. I use the MathJax Javascript library to render mathematical notation written in TeX for display in the browser. So to write a TeX math expression inline in the markdown file, we use backticks to enclose it in a code block, and then within the backticks we write the TeX expression using the usual delimiter, namely a $ symbol. For example, consider the following sentence.<\/p>\nLet <code>$n \\in \\mathbb{Z}^+$<\/code> be a positive integer.<\/p>\nNow the markdown to produce the above is:<\/p>\n<pre><code>Let $n \\in \\mathbb{Z}^+$ be a positive integer.<\/code><\/pre>\nHowever there is still some work to do. The code blocks within the produced HTML mean that MathJax will ignore the TeX. Furthermore, the dollar sign delimiters need to be changed to \\ ( and \\ ). Note there is actually no space between the backslash and the parenthesis but here we need to stop MathJax processing the text. Luckily, without having to reinvent the wheel, Yihui Xie solved this problem in his blog post here<\/a> with a piece of Javascript that preprocesses the code tags, changes the delimiters for the TeX expressions and then removes the code tags. So I use his code in uberlight, and you can find it here<\/a>. 
With this script added, we can comfortably use TeX expressions in our Markdown file, like we have done above.<\/p>\nUpdates<\/h1>\nOne feature I wanted to add was a means to extend existing blog posts with updates, which can be added over time and appear at the end of the post. The way I accomplish this in uberlight is with additional files in the posts directory with specific names. Suppose you have a post labeled 'example' such that there is a Markdown file in the posts directory with the name example.md. Now suppose we wish to add an update to such a post. To do this, we add a file whose name is the label of the post (in this case, 'example') followed by a hash symbol and an arbitrary string (which might describe the particular update) - then naturally the '.md' extension. This file has the same form as a post and the content is written in Markdown. An update will then be added to the post. The updates are ordered by modification date of the files, starting with the oldest towards the most recent. To illustrate this, I have added an update to this post by writing a file called 'uberlight#1.md' to the posts directory and you can see the result below at the end of the main post.<\/p>\nExtended Markdown<\/h1>\nOne of the motivations for developing an extension to Markdown was a desire to have a way to automatically and easily add theorems and proofs that are styled in a similar manner to LaTeX. Such 'environments' are not supported by MathJax, which only deals with math expressions. Furthermore, I thought it would be nice for the theorems etc. to be numbered automatically with an easy way to reference them throughout a post. For my extension to Markdown, read my post describing it here<\/a>.<\/p>\nNote as of writing, I have not as yet put the code on GitHub. 
I will do this very soon and add an update to this post to say that it has been made available.<\/p>","raw":"Uberlight is a fork of a minimalistic flat-file blog engine written in PHP called hyperlight, which you can find [on GitHub](https:\/\/github.com\/TomboFry\/hyperlight). My desire for my own blog was to have an even more minimalistic engine, so I forked hyperlight and stripped down some of the features, changed others and added more to suit my needs. Although the theme can be easily changed, my particular taste favors a simple style as you can see here. But even after stripping down hyperlight to a certain extent, there were features I wanted to add. So in this post, I describe some of the new features and how to use them. Firstly though, I will briefly describe how posts are written in uberlight, preserved exactly from hyperlight.\n\nUberlight has a config file which specifies a 'posts' directory. To write a post, a Markdown file is added to the posts directory with the extension '.md'. The prefix (i.e. the string preceding the extension and referred to in hyperlight as 'slug') is the unique label that is used to locate the post. The file starts with the title of the post. The next line is a description of the post. Another line below describes a series of optional tags, separated by commas, which are only used internally in uberlight and whose purpose differs significantly from that in hyperlight. The body of the post file is written in Markdown and provides the content of the post. One feature I have added is a way to obtain the raw content of the post in Markdown, so a link is given at the end of every post which does exactly that. You can find it at the end of this post and inspect the raw Markdown form if you so wish.\n\nNow we turn to the main new features I have added in uberlight, which accomplish things that I would personally like to do in my own posts.\n\n# Math Expressions in TeX\nWe would like to write math expressions in TeX. 
I use the MathJax Javascript library to render mathematical notation written in TeX for display in the browser. So to write a TeX math expression inline in the markdown file, we use backticks to enclose it in a code block, and then within the backticks we write the TeX expression using the usual delimiter, namely a $ symbol. For example, consider the following sentence.\n\nLet $n \\in \\mathbb{Z}^+$ be a positive integer.\n\nNow the markdown to produce the above is:\n\n```markdown\nLet $n \\in \\mathbb{Z}^+$ be a positive integer.\n```\n\nHowever there is still some work to do. The code blocks within the produced HTML mean that MathJax will ignore the TeX. Furthermore, the dollar sign delimiters need to be changed to \\\\ ( and \\\\ ). Note there is actually no space between the backslash and the parenthesis but here we need to stop MathJax processing the text. Luckily, without having to reinvent the wheel, Yihui Xie solved this problem in his blog post [here](https:\/\/yihui.org\/en\/2018\/07\/latex-math-markdown) with a piece of Javascript that preprocesses the code tags, changes the delimiters for the TeX expressions and then removes the code tags. So I use his code in uberlight, and you can find it [here](http:\/\/ciphron.netsoc.ie\/blog\/themes\/uberlight\/js\/math_code.js). With this script added, we can comfortably use TeX expressions in our Markdown file, like we have done above.\n\n# Updates\nOne feature I wanted to add was a means to extend existing blog posts with updates, which can be added over time and appear at the end of the post. The way I accomplish this in uberlight is with additional files in the posts directory with specific names. Suppose you have a post labeled 'example' such that there is a Markdown file in the posts directory with the name example.md. Now suppose we wish to add an update to such a post. 
To do this, we add a file whose name is the label of the post (in this case, 'example') followed by a hash symbol and an arbitrary string (which might describe the particular update) - then naturally the '.md' extension. This file has the same form as a post and the content is written in Markdown. An update will then be added to the post. The updates are ordered by modification date of the files, starting with the oldest towards the most recent. To illustrate this, I have added an update to this post by writing a file called 'uberlight#1.md' to the posts directory and you can see the result below at the end of the main post.\n\n# Extended Markdown\nOne of the motivations for developing an extension to Markdown was a desire to have a way to automatically and easily add theorems and proofs that are styled in a similar manner to LaTeX. Such 'environments' are not supported by MathJax, which only deals with math expressions. Furthermore, I thought it would be nice for the theorems etc. to be numbered automatically with an easy way to reference them throughout a post. For my extension to Markdown, read my post describing it [here](http:\/\/ciphron.netsoc.ie\/blog?post=extmarkdown).\n\nNote as of writing, I have not as yet put the code on GitHub. I will do this very soon and add an update to this post to say that it has been made available.","timestamp":1587600432,"edited":1587600432},{"updates":[{"title":"Code available on GitHub","slug":"extmarkdown#1","summary":"github","image":"","tags":["extmarkdown"],"content":"The code for the ExtMarkdown parser (written in PHP) is available now on GitHub as part of the uberlight blog engine. See the repository here<\/a>.<\/p>","raw":"The code for the ExtMarkdown parser (written in PHP) is available now on GitHub as part of the uberlight blog engine. 
See the repository [here](https:\/\/github.com\/ciphron\/hyperlight).\n","timestamp":1589530199,"edited":1589530199}],"title":"Extension to Markdown Supporting LaTeX-Style Theorem Environments, References and Citations","slug":"extmarkdown","summary":"extmarkdown","image":"","tags":["extmarkdown"],"content":"Extended Markdown - New Commands<\/h1>\nOne of the motivations for developing an extension to Markdown was a desire to have a way to automatically and easily add theorems and proofs that are styled in a similar manner to LaTeX. Such 'environments' are not supported by MathJax, which only deals with math expressions. Furthermore, I thought it would be nice for the theorems etc. to be numbered automatically with an easy way to reference them throughout a post. To accommodate such new features, I chose a syntax to specify commands in a Markdown file. A command is signalled by an exclamation mark (!) followed by the alphanumeric name of the command. If the command takes arguments, each argument is enclosed in braces and follows immediately after the name of the command (this is in fact modelled after LaTeX). So to invoke the theorem command, take a look at the code below. The arguments of the theorem command are first, the label of the theorem (so that it can easily be referenced) and second, the content of the theorem. So we write:<\/p>\n!theorem{th:primes}{There are an infinite number of primes.}<\/code><\/pre>\nIt is rendered as<\/p>\n\n1. <\/b>There are an infinite number of primes.<\/i>\n<\/div>\nI call this extension to Markdown, rather lazily, ExtMarkdown, which is not very original considering the many extensions to Markdown out there. Also we have commands for definitions, lemmas and proofs. Then there is a reference (!ref) command whose only argument is the label of the theorem, lemma or definition to be referenced. Here is some elementary mathematical content that serves to illustrate the commands.<\/p>\n\n1. 
<\/b>A semigroup consists of a set together with a binary operation satisfying the following properties.\nThe set is closed under the binary operation.<\/li>\nThe binary operation is associative.<\/li>\n<\/ol>\n<\/div>\nNow we have the following theorem.<\/p>\n\n2. <\/b>The set $S := \\{x^2 + y^2 : x, y \\in \\mathbb{Z}^+\\}$<\/code> together with the binary operation of integer multiplication is a semigroup.<\/i>\n<\/div>\n\nAssociativity follows immediately from that of integer multiplication. However, we must show that the set is closed under multiplication. This can be seen as follows $$(x_1^2 + y_1^2)\\cdot(x_2^2 + y_2^2) = x_1^2x_2^2 + x_1^2y_2^2 + x_2^2 y_1^2 + y_1^2y_2^2 = x_1^2x_2^2 + x_1^2y_2^2 + x_2^2y_1^2 + y_1^2y_2^2 + 2x_1x_2y_1y_2 - 2x_1x_2y_1y_2 = (x_1x_2 + y_1y_2)^2 + (x_1y_2 - x_2y_1)^2$$<\/code>. Therefore, $(S, \\ast)$<\/code> is a semigroup as it satisfies the properties of Definition 1.\n<\/div>\nNote that we used a '!ref' command in the proof of Theorem 2 to reference Definition 1 (and twice in this very sentence). See the raw ExtMarkdown by following the link at the end of the post.<\/p>\nThe CSS I use for the theorem etc. environments is from a post by Zach Harmony<\/a>.<\/p>\nCitations<\/h1>\nAnother useful command that I've added is '!cite' for citation of papers etc. It takes one argument, namely a BibTeX key, which locates an entry in a BibTeX bibliography file that is specified in the uberlight configuration. A list of references is automatically generated and appended to the end of the post. Updates can cite articles that are cited in the main post, and if they make citations to articles not cited in the main post, a list of 'further' references is appended to the update. To illustrate citations, I will now arbitrarily cite .<\/p>\nNote as of writing, I have not as yet put the code on GitHub. 
I will do this very soon and add an update to this post to say that it has been made available.<\/p>\nReferences<\/h2>\n\n\nScargos: Towards Automatic Vulnerability Distribution<\/i>. Florian Rhinow and Michael Clear. SECRYPT. 2015. \n<\/li>\n<\/ol>\n","raw":"\n# Extended Markdown - New Commands\nOne of the motivations for developing an extension to Markdown was a desire to have a way to automatically and easily add theorems and proofs that are styled in a similar manner to LaTeX. Such 'environments' are not supported by MathJax, which only deals with math expressions. Furthermore, I thought it would be nice for the theorems etc. to be numbered automatically with an easy way to reference them throughout a post. To accommodate such new features, I chose a syntax to specify commands in a Markdown file. A command is signalled by an exclamation mark (!) followed by the alphanumeric name of the command. If the command takes arguments, each argument is enclosed in braces and follows immediately after the name of the command (this is in fact modelled after LaTeX). So to invoke the theorem command, take a look at the code below. The arguments of the theorem command are first, the label of the theorem (so that it can easily be referenced) and second, the content of the theorem. So we write:\n```markdown\n!theorem{th:primes}{There are an infinite number of primes.}\n```\n\nIt is rendered as\n!theorem{th:primes}{There are an infinite number of primes.}\n\nI call this extension to Markdown, rather lazily, ExtMarkdown, which is not very original considering the many extensions to Markdown out there. Also we have commands for definitions, lemmas and proofs. Then there is a reference (\\!ref) command whose only argument is the label of the theorem, lemma or definition to be referenced. 
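The '!name{arg}...{arg}' syntax just described can be recognized in a few lines. The real ExtMarkdown parser is written in PHP; what follows is a hypothetical Python sketch that assumes arguments contain no nested braces (the real parser may well be more permissive).

```python
import re

# Hypothetical sketch (not the PHP ExtMarkdown parser) of recognizing the
# '!name{arg}...{arg}' command syntax: an exclamation mark, an alphanumeric
# name, then zero or more brace-enclosed arguments with no nested braces.
COMMAND = re.compile(r'!([A-Za-z0-9]+)((?:\{[^{}]*\})*)')

def parse_commands(text):
    """Return a list of (name, [args]) pairs for each command found in text."""
    commands = []
    for match in COMMAND.finditer(text):
        name = match.group(1)
        args = re.findall(r'\{([^{}]*)\}', match.group(2))
        commands.append((name, args))
    return commands
```

For instance, parse_commands('!ref{th:primes}') returns [('ref', ['th:primes'])], and a two-argument command such as the theorem example above yields both arguments in order.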
Here is some elementary mathematical content that serves to illustrate the commands.\n\n!definition{def:semigroup}{A semigroup consists of a set together with a binary operation satisfying the following properties.\n1. The set is closed under the binary operation.\n2. The binary operation is associative.\n}\nNow we have the following theorem.\n!theorem{th:sum_two_squares}{\nThe set $S := \\{x^2 + y^2 : x, y \\in \\mathbb{Z}^+\\}$ together with the binary operation of integer multiplication is a semigroup.\n}\n!proof{\nAssociativity follows immediately from that of integer multiplication. However, we must show that the set is closed under multiplication. This can be seen as follows $$(x_1^2 + y_1^2)\\cdot(x_2^2 + y_2^2) = x_1^2x_2^2 + x_1^2y_2^2 + x_2^2 y_1^2 + y_1^2y_2^2 = x_1^2x_2^2 + x_1^2y_2^2 + x_2^2y_1^2 + y_1^2y_2^2 + 2x_1x_2y_1y_2 - 2x_1x_2y_1y_2 = (x_1x_2 + y_1y_2)^2 + (x_1y_2 - x_2y_1)^2$$. Therefore, $(S, \\ast)$ is a semigroup as it satisfies the properties of Definition !ref{def:semigroup}.\n}\n\nNote that we used a '\\!ref' command in the proof of Theorem !ref{th:sum_two_squares} to reference Definition !ref{def:semigroup} (and twice in this very sentence). See the raw ExtMarkdown by following the link at the end of the post.\n\nThe CSS I use for the theorem etc. environments is from a [post by Zach Harmony](http:\/\/drz.ac\/2013\/01\/17\/latex-theorem-like-environments-for-the-web\/).\n\n# Citations\nAnother useful command that I've added is '\\!cite' for citation of papers etc. It takes one argument, namely a BibTeX key, which locates an entry in a BibTeX bibliography file that is specified in the uberlight configuration. A list of references is automatically generated and appended to the end of the post. Updates can cite articles that are cited in the main post, and if they make citations to articles not cited in the main post, a list of 'further' references is appended to the update. 
To illustrate citations, I will now arbitrarily cite !cite{conf\/secrypt\/RhinowC15}.\n\nNote as of writing, I have not as yet put the code on GitHub. I will do this very soon and add an update to this post to say that it has been made available.","timestamp":1586676512,"edited":1586676512},{"updates":[{"title":"Update to geom2d.Plane Interface","slug":"flower#1","summary":"flower","image":"","tags":["flower"],"content":"Since this post was written, I have modified the geom2d.Plane interface on GitHub so there are some minor differences between the current interface and the code in the post that uses it. Also, a new implementation of geom2d.Plane for Tkinter has been added. See the repositories on GitHub.<\/p>","raw":"Since this post was written, I have modified the geom2d.Plane interface on GitHub so there are some minor differences between the current interface and the code in the post that uses it. Also, a new implementation of geom2d.Plane for Tkinter has been added. See the repositories on GitHub.","timestamp":1583635922,"edited":1583635922}],"title":"Overlapping Circles and the Flower of Life","slug":"flower","summary":"flower","image":"","tags":["flower"],"content":"My intention in this post is to discuss how to algorithmically draw some geometric patterns that I find strikingly beautiful, including the well-known Flower of Life. I will not comment on any significance such patterns may hold for certain people or how they have been used and interpreted. The focus here is on geometry and code. Once again, like my most recent posts, we will be using Python. First though I want to introduce a helpful building block that simplifies the process and allows us to focus more on the higher-level aspects.<\/p>\nThe geom2d module<\/h2>\nI have written a simple module called 'geom2d' and put it on GitHub<\/a>. At the heart of geom2d is an abstraction called a Plane that allows you to draw basic geometric shapes to a 2D surface using floating-point coordinates. 
Here is the interface.<\/p>\nclass Plane(object):\n    def __init__(self, width, height):\n        self._width = width\n        self._height = height\n\n    @property\n    def width(self):\n        return self._width\n\n    @property\n    def height(self):\n        return self._height\n\n    def draw_point(self, point, color):\n        pass\n\n    def draw_line(self, start, end, color):\n        pass\n\n    def draw_circle(self, circle, color):\n        pass\n\n    # start and end are normalized angles on the unit interval\n    # moving in an anticlockwise direction\n    def draw_arc(self, center, radius, start, end, color):\n        pass<\/code><\/pre>\nThe GitHub repository<\/a> contains some useful implementations of Plane. For vector graphics, there is SVGPlane. And for rasters, there is RasterPlane, which requires a Raster object (an implementation of Raster is provided: ImageRaster).<\/p>\nTriangular Lattice Form of Overlapping Circles<\/h2>\nNow we turn to the common triangular lattice form of overlapping circles. We have a sequence of patterns of concentric hexagonal rings of circles. Each pattern in the sequence has a complete hexagonal ring of circles and contains $C(n)$<\/code> circles where the sequence $C(n)$<\/code> is 1, 1, 7, 19, 37, 61... (starting with $n = 0$<\/code>); that is, in general, we have $C(n) = 3n(n - 1) + 1$<\/code>. So how do we draw such patterns?<\/p>\nThe construction is relatively straightforward. We begin with a center point $(c_x, c_y)$<\/code> and radius $r$<\/code> for the pattern. The circles then have radius $r' = r \/ (n - 1)$<\/code>. Naturally, we have to handle the case $n \\in \\{0, 1\\}$<\/code> separately and return a single circle centered at the given center point with radius $r$<\/code>. For $n \\geq 2$<\/code>, here is how we proceed. Note that we omit specifying the radius for the circles since each circle we draw will implicitly have radius $r'$<\/code>. We start off with two circles, the first with center $(c_x, c_y)$<\/code> and the second with center $(c_x, c_y - r')$<\/code>. 
So our list of circles contains both of these. Then we rely on a crucial function called perform_intersections. This function takes a list of circles, a centre $(c'_x, c'_y)$<\/code> and a bounding radius $b$<\/code>. It finds the intersection points of every circle in the list with every other circle in the list and if the distance between each point and $(c'_x, c'_y)$<\/code> is less than or equal to $b$<\/code>, it adds a circle to the list centered on that point. Then the process is repeated with the new circles until no more circles can be added. Here is the code, which admittedly needs some tidying up and optimization, but you will get a sketch of the algorithm.<\/p>\ndef perform_intersections(circles, center, radius):\n    if len(circles) == 0:\n        return\n\n    circle_radius = circles[0].radius\n    points = set(map(lambda c: c.center, circles))\n    index = 1\n\n    while index < len(circles):\n        sel_circle = circles[index]\n        for i in range(index):\n            other_circle = circles[i]\n            inter = sel_circle.intersection(other_circle)\n            for p in inter:\n                pr = Point(round(p.x, PRECISION_DIGITS),\n                           round(p.y, PRECISION_DIGITS))\n                if not is_near_identical(pr, points) \\\n                        and pr.distance_to(center) <= radius:\n                    circle = Circle(pr, circle_radius)\n                    circles.append(circle)\n                    points.add(pr)\n        index += 1<\/code><\/pre>\nNow we have completed the workhorse of the algorithm to draw these patterns. Our function to produce the patterns, which we name overlapping_circles, locates the pattern in local space with center $(0, 0)$<\/code> and sets the radius of the circles to $1.0$<\/code> so that it can be easily scaled and translated as required. We proceed by adding our two initial circles to the list as described above with radius $1.0$<\/code> and then we call perform_intersections with this list of circles, center point $(0, 0)$<\/code> and bounding radius $n - 1$<\/code> (plus some epsilon to take care of floating point errors). 
Here is the code.<\/p>\ndef overlapping_circles(n):\n    if n < 0:\n        return []\n\n    if n == 0 or n == 1:\n        return [Circle(Point(0, 0), 1.0)]\n\n    init_circles = []\n    center = Point(0, 0)\n    radius = 1.0\n    circle1 = Circle(center, radius)\n    init_circles.append(circle1)\n\n    circle2 = Circle(Point(center.x, center.y - radius), radius)\n    init_circles.append(circle2)\n\n    circles = list(init_circles)\n    perform_intersections(circles, center, (n - 1) + FP_THRESHOLD)\n    return circles<\/code><\/pre>\nObserve that by calling overlapping_circles with argument $n$<\/code>, we obtain a list of circles $L_n$<\/code> such that $\\mathsf{len}(L_n) = C(n)$<\/code>. And we get the desired result. We can then iterate through the list of circles and draw each one to the Plane using the method Plane.draw_circle. Here is an example of an image generated by the program which shows overlapping circles for $n = 5$<\/code>.\n <\/p>\nThe Flower of Life<\/h2>\nWe have a lot of the hard work done. The function to create the flower of life is very simple. The flower of life consists of an outer circle with radius 3.0 in local space and 19 "full circles" inside the outer circle. Then the remaining circles we term "surrounding circles" and they intersect with the full circles and we have to draw arcs at the points of intersection inside the full circles. So put simply, the flower of life pattern in our code is a pair consisting of firstly a list of full circles, and secondly, a list of surrounding circles. We leverage overlapping_circles to obtain these circles. The full circles are obtained by calling overlapping_circles with $n = 3$<\/code> and the surrounding circles are obtained by calling overlapping_circles with $n = 5$<\/code> and then filtering out the full circles. We note that it is easy to generalize this code and extend the pattern to a greater number of circles, but here we just stick to the common flower of life pattern. 
The code is simple.<\/p>\ndef create_flower_of_life():\n    full_circles = overlapping_circles(3)\n    all_circles = overlapping_circles(5)\n    surr_circles = list(set(all_circles).difference(set(full_circles)))\n    return (full_circles, surr_circles)<\/code><\/pre>\nThe next challenge is to draw these circles to the plane. The tricky part is the arcs. First, recall the prototype of Plane.draw_arc. This function is passed a center point and a radius, which describe a particular circle, along with parameters called start and end. The parameters start and end are normalized angles in the interval $[0, 1]$<\/code>. So 0 corresponds to zero radians and 1.0 corresponds to $2\\pi$<\/code> radians. And the function draws the arc in an anti-clockwise direction. Therefore, if we have start=0 and end=0.25, we will get an arc around the upper-right quadrant of the circle. However, if we have start=0.25 and end=0, we will get an arc around the other three quadrants of the circle.<\/p>\nBefore dealing with the arcs, though, we will look at the high-level code to draw the flower of life.<\/p>\ndef draw_flower_of_life(plane, flower_pattern, center, radius, color):\n    full_circles, surr_circles = flower_pattern\n\n    circle_radius = radius \/ 3.0\n\n    outer_circle = Circle(center, radius)\n    plane.draw_circle(outer_circle, color)\n\n    def transform(circle):\n        circle_center = Point(circle.center.x*circle_radius + center.x,\n                              circle.center.y*circle_radius + center.y)\n        return Circle(circle_center, circle_radius)\n\n    full_circles = list(map(transform, full_circles))\n    surr_circles = list(map(transform, surr_circles))\n\n    for circle in full_circles:\n        plane.draw_circle(circle, color)\n\n    for circle in surr_circles:\n        for full_circle in full_circles:\n            draw_circle_filtered(plane, circle, full_circle, color)<\/code><\/pre>\nObserve that for the surrounding circles, we iterate through the list of full circles and pass each one as an argument to a function called draw_circle_filtered. 
So the next step is to inspect this function and see what it does. In this function, the circle argument is the surrounding circle and the enclosing_circle argument is the full circle that we have passed in the function above.<\/p>\nThe draw_circle_filtered function looks for intersection points between the circle argument and the enclosing_circle argument. If it finds none, it simply returns. But if two intersection points are found, it proceeds to work out the normalized angles from where the intersection points are on the circle. Then it has to figure out the order i.e. which one is start and which one is end. There is a helper function called is_arc_outside_circle (which you can see in the repository on GitHub) that determines this order. Finally, we invoke Plane.draw_arc with the correct start and end values. Here is the code.<\/p>\ndef draw_circle_filtered(plane, circle, enclosing_circle, color):\n    inter = enclosing_circle.intersection(circle)\n    if len(inter) != 2:\n        return\n    else:\n        rangles = []\n        for p in inter:\n            # Work out the angle from where the intersection point\n            # is on the circle\n            rad = acos((p.x - circle.center.x) \/ circle.radius)\n\n            # Convert this angle in radians to a normalized angle in\n            # the interval [0, 1]\n            rangle = 0\n            if p.y > circle.center.y:\n                rangle = 0.5 + ((pi - rad) \/ (2*pi))\n            else:\n                rangle = rad \/ (2*pi)\n            rangles.append(rangle)\n        if is_arc_outside_circle(circle.center, circle.radius,\n                                 rangles[0], rangles[1], enclosing_circle):\n            rangles.reverse()\n        plane.draw_arc(circle.center, circle.radius, rangles[0],\n                       rangles[1], color)<\/code><\/pre>\nNow I want to remark that there is room for plenty of optimization in various parts of the code and at times simplicity is favored over efficiency. But now, we have completed all the steps to algorithmically draw the triangular lattice form of overlapping circles in addition to the flower of life. 
Putting it all together, we have code that sets up two planes: a raster plane corresponding to our raster, namely ImageRaster (where we will produce a PNG) and an SVGPlane. Here is a code snippet.<\/p>\nflower_pattern = create_flower_of_life()\n\nplanes = []\n\nimage_raster = ImageRaster(400, 300, geom2d.WHITE)\nimage_plane = RasterPlane(PLANE_WIDTH, PLANE_HEIGHT, image_raster)\nplanes.append(image_plane)\n\nsvg_plane = SVGPlane(PLANE_WIDTH, PLANE_HEIGHT, 'floweroflife.svg')\nplanes.append(svg_plane)\n\nfor plane in planes:\n    draw_flower_of_life(plane, flower_pattern,\n                        Point(PLANE_WIDTH \/ 2.0, PLANE_HEIGHT \/ 2.0),\n                        FLOWER_RADIUS, geom2d.BLACK)\n\nimage_raster.save('floweroflife.png')\nsvg_plane.save()<\/code><\/pre>\nYou can find all the code in the GitHub repository<\/a>. Finally, here is the SVG this code produces of the flower of life.\n <\/p>\nHere is the PNG for completeness.\n <\/p>","raw":"My intention in this post is to discuss how to algorithmically draw some geometric patterns that I find strikingly beautiful, including the well-known Flower of Life. I will not comment on any significance such patterns may hold for certain people or how they have been used and interpreted. The focus here is on geometry and code. Once again, like my most recent posts, we will be using Python. First though I want to introduce a helpful building block that simplifies the process and allows us to focus more on the higher-level aspects.\n\n## The geom2d module\nI have written a simple module called 'geom2d' and put it [on GitHub](https:\/\/github.com\/ciphron\/geom2d). At the heart of geom2d is an abstraction called a Plane that allows you to draw basic geometric shapes to a 2D surface using floating-point coordinates. 
Here is the interface.\n```python\nclass Plane(object):\n    def __init__(self, width, height):\n        self._width = width\n        self._height = height\n\n    @property\n    def width(self):\n        return self._width\n\n    @property\n    def height(self):\n        return self._height\n\n    def draw_point(self, point, color):\n        pass\n\n    def draw_line(self, start, end, color):\n        pass\n\n    def draw_circle(self, circle, color):\n        pass\n\n    # start and end are normalized angles on the unit interval\n    # moving in an anticlockwise direction\n    def draw_arc(self, center, radius, start, end, color):\n        pass\n```\n\nThe [GitHub repository](https:\/\/github.com\/ciphron\/geom2d) contains some useful implementations of Plane. For vector graphics, there is SVGPlane. And for rasters, there is RasterPlane, which requires a Raster object (an implementation of Raster is provided: ImageRaster).\n\n## Triangular Lattice Form of Overlapping Circles\nNow we turn to the common triangular lattice form of overlapping circles. We have a sequence of patterns of concentric hexagonal rings of circles. Each pattern in the sequence has a complete hexagonal ring of circles and contains $C(n)$ circles where the sequence $C(n)$ is 1, 1, 7, 19, 37, 61... (starting with $n = 0$); that is, in general, we have $C(n) = 3n(n - 1) + 1$. So how do we draw such patterns?\n\nThe construction is relatively straightforward. We begin with a center point $(c_x, c_y)$ and radius $r$ for the pattern. The circles then have radius $r' = r \/ (n - 1)$. Naturally, we have to handle the case $n \\in \\{0, 1\\}$ separately and return a single circle centered at the given center point with radius $r$. For $n \\geq 2$, here is how we proceed. Note that we omit specifying the radius for the circles since each circle we draw will implicitly have radius $r'$. We start off with two circles, the first with center $(c_x, c_y)$ and the second with center $(c_x, c_y - r')$. So our list of circles contains both of these. 
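As an aside, the closed form $C(n) = 3n(n - 1) + 1$ quoted earlier is easy to sanity-check against the listed sequence:

```python
# Check the closed form C(n) = 3n(n - 1) + 1 against the sequence
# quoted above: 1, 1, 7, 19, 37, 61, ...
def C(n):
    return 3 * n * (n - 1) + 1

print([C(n) for n in range(6)])  # [1, 1, 7, 19, 37, 61]
```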
Then we rely on a crucial function called perform_intersections. This function takes a list of circles, a centre $(c'_x, c'_y)$ and a bounding radius $b$. It finds the intersection points of every circle in the list with every other circle in the list and if the distance between each point and $(c'_x, c'_y)$ is less than or equal to $b$, it adds a circle to the list centered on that point. Then the process is repeated with the new circles until no more circles can be added. Here is the code, which admittedly needs some tidying up and optimization, but you will get a sketch of the algorithm.\n```python\ndef perform_intersections(circles, center, radius):\n    if len(circles) == 0:\n        return\n\n    circle_radius = circles[0].radius\n    points = set(map(lambda c: c.center, circles))\n    index = 1\n\n    while index < len(circles):\n        sel_circle = circles[index]\n        for i in range(index):\n            other_circle = circles[i]\n            inter = sel_circle.intersection(other_circle)\n            for p in inter:\n                pr = Point(round(p.x, PRECISION_DIGITS),\n                           round(p.y, PRECISION_DIGITS))\n                if not is_near_identical(pr, points) \\\n                        and pr.distance_to(center) <= radius:\n                    circle = Circle(pr, circle_radius)\n                    circles.append(circle)\n                    points.add(pr)\n        index += 1\n```\n\nNow we have completed the workhorse of the algorithm to draw these patterns. Our function to produce the patterns, which we name overlapping_circles, locates the pattern in local space with center $(0, 0)$ and sets the radius of the circles to $1.0$ so that it can be easily scaled and translated as required. We proceed by adding our two initial circles to the list as described above with radius $1.0$ and then we call perform_intersections with this list of circles, center point $(0, 0)$ and bounding radius $n - 1$ (plus some epsilon to take care of floating point errors). 
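For reference, the two-circle intersection that perform_intersections relies on (via Circle.intersection) can be sketched in isolation. This is a hypothetical standalone version using the standard radical-line construction, not the geom2d implementation:

```python
from math import sqrt

# Hypothetical standalone sketch (not the geom2d code) of two-circle
# intersection: returns the 0, 1 or 2 intersection points of the circle
# with center c0 = (x0, y0) and radius r0 and the circle with center c1
# and radius r1.
def circle_intersection(c0, r0, c1, r1):
    (x0, y0), (x1, y1) = c0, c1
    d = sqrt((x1 - x0) ** 2 + (y1 - y0) ** 2)
    # no intersection: too far apart, one inside the other, or same center
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []
    a = (r0 ** 2 - r1 ** 2 + d ** 2) / (2 * d)  # distance to radical line
    h = sqrt(max(r0 ** 2 - a ** 2, 0.0))        # half the chord length
    mx = x0 + a * (x1 - x0) / d
    my = y0 + a * (y1 - y0) / d
    if h == 0:
        return [(mx, my)]                       # tangent circles
    return [(mx + h * (y1 - y0) / d, my - h * (x1 - x0) / d),
            (mx - h * (y1 - y0) / d, my + h * (x1 - x0) / d)]
```

For the two initial circles above (unit circles centered at $(0, 0)$ and $(0, -1)$), this yields the points $(\pm\sqrt{3}/2, -1/2)$, which is where the next ring of circle centers comes from.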
Here is the code.\n```python\ndef overlapping_circles(n):\n    if n < 0:\n        return []\n\n    if n == 0 or n == 1:\n        return [Circle(Point(0, 0), 1.0)]\n\n    init_circles = []\n    center = Point(0, 0)\n    radius = 1.0\n    circle1 = Circle(center, radius)\n    init_circles.append(circle1)\n\n    circle2 = Circle(Point(center.x, center.y - radius), radius)\n    init_circles.append(circle2)\n\n    circles = list(init_circles)\n    perform_intersections(circles, center, (n - 1) + FP_THRESHOLD)\n    return circles\n```\n\nObserve that by calling overlapping_circles with argument $n$, we obtain a list of circles $L_n$ such that $\\mathsf{len}(L_n) = C(n)$. And we get the desired result. We can then iterate through the list of circles and draw each one to the Plane using the method Plane.draw_circle. Here is an example of an image generated by the program which shows overlapping circles for $n = 5$.\n![Overlapping circles for n = 5](http:\/\/ciphron.netsoc.ie\/blog\/images\/ocircles.png \"Overlapping circles for n = 5\")\n\n\n## The Flower of Life\nWe have a lot of the hard work done. The function to create the flower of life is very simple. The flower of life consists of an outer circle with radius 3.0 in local space and 19 \"full circles\" inside the outer circle. Then the remaining circles we term \"surrounding circles\" and they intersect with the full circles and we have to draw arcs at the points of intersection inside the full circles. So put simply, the flower of life pattern in our code is a pair consisting of firstly a list of full circles, and secondly, a list of surrounding circles. We leverage overlapping_circles to obtain these circles. The full circles are obtained by calling overlapping_circles with $n = 3$ and the surrounding circles are obtained by calling overlapping_circles with $n = 5$ and then filtering out the full circles. We note that it is easy to generalize this code and extend the pattern to a greater number of circles, but here we just stick to the common flower of life pattern. 
The code is simple.\n```python\ndef create_flower_of_life():\n    full_circles = overlapping_circles(3)\n    all_circles = overlapping_circles(5)\n    surr_circles = list(set(all_circles).difference(set(full_circles)))\n    return (full_circles, surr_circles)\n```\n\nThe next challenge is to draw these circles to the plane. The tricky part is the arcs. First, recall the prototype of Plane.draw_arc. This function is passed a center point and a radius, which describe a particular circle, along with parameters called start and end. The parameters start and end are normalized angles in the interval $[0, 1]$. So 0 corresponds to zero radians and 1.0 corresponds to $2\\pi$ radians. And the function draws the arc in an anti-clockwise direction. Therefore, if we have start=0 and end=0.25, we will get an arc around the upper-right quadrant of the circle. However, if we have start=0.25 and end=0, we will get an arc around the other three quadrants of the circle.\n\nBefore dealing with the arcs, though, we will look at the high-level code to draw the flower of life.\n```python\ndef draw_flower_of_life(plane, flower_pattern, center, radius, color):\n    full_circles, surr_circles = flower_pattern\n\n    circle_radius = radius \/ 3.0\n\n    outer_circle = Circle(center, radius)\n    plane.draw_circle(outer_circle, color)\n\n    def transform(circle):\n        circle_center = Point(circle.center.x*circle_radius + center.x,\n                              circle.center.y*circle_radius + center.y)\n        return Circle(circle_center, circle_radius)\n\n    full_circles = list(map(transform, full_circles))\n    surr_circles = list(map(transform, surr_circles))\n\n    for circle in full_circles:\n        plane.draw_circle(circle, color)\n\n    for circle in surr_circles:\n        for full_circle in full_circles:\n            draw_circle_filtered(plane, circle, full_circle, color)\n```\n\nObserve that for the surrounding circles, we iterate through the list of full circles and pass each one as an argument to a function called draw_circle_filtered. 
So the next step is to inspect this function and see what it does. In this function, the circle argument is the surrounding circle and the enclosing_circle argument is the full circle that we have passed in the function above.\n\nThe draw_circle_filtered function looks for intersection points between the circle argument and the enclosing_circle argument. If it finds none, it simply returns. But if two intersection points are found, it proceeds to work out the normalized angles from where the intersection points are on the circle. Then it has to figure out the order i.e. which one is start and which one is end. There is a helper function called is_arc_outside_circle (which you can see in the repository on GitHub) that determines this order. Finally, we invoke Plane.draw_arc with the correct start and end values. Here is the code.\n```python\ndef draw_circle_filtered(plane, circle, enclosing_circle, color):\n    inter = enclosing_circle.intersection(circle)\n    if len(inter) != 2:\n        return\n    else:\n        rangles = []\n        for p in inter:\n            # Work out the angle from where the intersection point\n            # is on the circle\n            rad = acos((p.x - circle.center.x) \/ circle.radius)\n\n            # Convert this angle in radians to a normalized angle in\n            # the interval [0, 1]\n            rangle = 0\n            if p.y > circle.center.y:\n                rangle = 0.5 + ((pi - rad) \/ (2*pi))\n            else:\n                rangle = rad \/ (2*pi)\n            rangles.append(rangle)\n        if is_arc_outside_circle(circle.center, circle.radius,\n                                 rangles[0], rangles[1], enclosing_circle):\n            rangles.reverse()\n        plane.draw_arc(circle.center, circle.radius, rangles[0],\n                       rangles[1], color)\n```\n\nNow I want to remark that there is room for plenty of optimization in various parts of the code and at times simplicity is favored over efficiency. But now, we have completed all the steps to algorithmically draw the triangular lattice form of overlapping circles in addition to the flower of life. 
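The angle normalization inside draw_circle_filtered can be pulled out into a standalone helper to make the convention explicit; here is a sketch (the p.y > c.y branch reflects y growing downwards in screen coordinates):

```python
from math import acos, pi

# Standalone version of the normalization in draw_circle_filtered: map a
# point (px, py) lying on the circle with center (cx, cy) and radius r to
# a normalized angle in [0, 1), with 0 at the positive x-axis. The
# py > cy branch accounts for y growing downwards in screen coordinates.
def normalized_angle(px, py, cx, cy, r):
    rad = acos((px - cx) / r)
    if py > cy:
        return 0.5 + (pi - rad) / (2 * pi)
    return rad / (2 * pi)
```

So the rightmost point of a circle maps to 0, the topmost point (where $p_y < c_y$ on screen) to 0.25, the leftmost to 0.5 and the bottommost to 0.75, consistent with the anticlockwise convention of Plane.draw_arc.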
Putting it all together, we have code that sets up two planes: a raster plane corresponding to our raster, namely ImageRaster (where we will produce a PNG), and an SVGPlane. Here is a code snippet.\n```python\nflower_pattern = create_flower_of_life()\n\nplanes = []\n\nimage_raster = ImageRaster(400, 300, geom2d.WHITE)\nimage_plane = RasterPlane(PLANE_WIDTH, PLANE_HEIGHT, image_raster)\nplanes.append(image_plane)\n\nsvg_plane = SVGPlane(PLANE_WIDTH, PLANE_HEIGHT, 'floweroflife.svg')\nplanes.append(svg_plane)\n\nfor plane in planes:\n draw_flower_of_life(plane, flower_pattern,\n Point(PLANE_WIDTH \/ 2.0, PLANE_HEIGHT \/ 2.0),\n FLOWER_RADIUS, geom2d.BLACK)\n\n\nimage_raster.save('floweroflife.png')\nsvg_plane.save()\n```\n\nYou can find all the code in the [GitHub repository](https:\/\/github.com\/ciphron\/floweroflife). Finally, here is the SVG this code produces of the flower of life.\n![Flower of Life](http:\/\/ciphron.netsoc.ie\/blog\/images\/floweroflife.svg \"Flower of Life\")\n\nHere is the PNG for completeness.\n![Flower of Life](http:\/\/ciphron.netsoc.ie\/blog\/images\/floweroflife.png \"Flower of Life\")\n","timestamp":1581950734,"edited":1581950734},{"updates":[],"title":"Breaking Anonymity in IBE from Higher Residuosity","slug":"anon","summary":"anon","image":"","tags":["anon"],"content":"In this post I want to address anonymity of an identity-based encryption scheme in my paper from PKC 2019 available <a href=\"https:\/\/eprint.iacr.org\/2019\/062.pdf\">here<\/a>. Basically, the assumption presented in the paper to achieve anonymity for my scheme has been recently broken by Zhao et al. in <a href=\"https:\/\/eprint.iacr.org\/2019\/557.pdf\">this paper<\/a>, which describes a generalized Galbraith test for an IBE scheme, such as that described in my paper (extended from <a href=\"https:\/\/ptolemy.berkeley.edu\/projects\/truststc\/education\/reu\/13\/Papers\/SabinM_Paper.pdf\">Boneh, Lavigne and Sabin<\/a>), with $e > 2$<\/code>. So the new cryptographic assumption made in my paper, namely the Special Polynomial Equation Solvability (SPES) assumption, turns out not to be true. I was taking the wrong approach in the brief cryptanalysis section in the paper.
In this post, I will describe the new Galbraith test of Zhao et al. and how it breaks the assumption, along with code for it in Sage. Before that, I will recall a definition from my paper; familiarity with the notation and terminology from the paper is advised. <\/p>\n<strong>Definition: Algebraic Equation Set<\/strong> <\/p>\n<blockquote>\nThe algebraic equation set for a ciphertext $c(x) \in \hat{C}$<\/code> with respect to $a = H(\mathsf{id})$<\/code> is derived as follows. The unknowns are the coefficients $z_0, \ldots, z_{e - 1}$<\/code> of the polynomial $f(x)$<\/code> generated during encryption. Raising $f(x)$<\/code> to the power of $e$<\/code> and reducing according to the equivalence relation $x^e \equiv a$<\/code> induced by the quotient ring $\mathbb{Z}_N^\ast[x]\/(x^e - a)$<\/code> yields a set of $e$<\/code> multivariate polynomials in $z_0, \ldots, z_{e - 1}$<\/code> of degree $e$<\/code>, one for each coefficient of the result. The algebraic equation set is formed by setting the polynomial for the $i$<\/code>-th coefficient of the result equal to $c_i$<\/code> for $i \in \{0, \ldots, e - 1\}$<\/code>. For example, the algebraic equation set for $e = 3$<\/code> is\n\begin{align} z_0^3 + a z_1^3 + a^2 z_2^3 + 6a z_0 z_1 z_2 = c_0 \\\\\\\\ 3z_0^2 z_1 + 3a z_0 z_2^2 + 3a z_1^2 z_2 = c_1 \\\\\\\\ 3z_0^2 z_2 + 3z_0 z_1^2 + 3a z_1 z_2^2 = c_2 \end{align}<\/code><\/p>\n<\/blockquote>\nSo basically, SPES is the assumption that deciding solvability of the above set of multivariate polynomial equations is hard. This turns out to be an invalid assumption because there is a better way to look at the problem, one that yields an efficient generalized Galbraith test that breaks anonymity in the Boneh, Lavigne and Sabin (BLS) scheme and related schemes.<\/p>\n<h2>Galbraith Test of Zhao et al. for small $e \ge 2$<\/code><\/h2>\nI suggest reading Zhao et al. first; I will briefly recall the ideas here.
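First, though, the algebraic equation set above lends itself to a quick numerical sanity check. The following is my own plain-Python illustration with arbitrary toy values (not code from either paper): it expands $f(x)^3$ modulo $x^3 - a$ over $\mathbb{Z}_N$ and compares the coefficients against the three closed-form polynomials for $e = 3$.

```python
def polymul_mod(f, g, a, N):
    """Multiply polynomials f and g (coefficient lists, low degree first,
    both of length e) in Z_N[x]/(x^e - a)."""
    e = len(f)
    res = [0] * (2 * e - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            res[i + j] = (res[i + j] + fi * gj) % N
    out = res[:e]
    for k in range(e, len(res)):  # fold x^(e+j) down to a * x^j
        out[k - e] = (out[k - e] + a * res[k]) % N
    return out

# Arbitrary toy values (no cryptographic meaning).
N, a = 1009, 5
z0, z1, z2 = 3, 7, 11
f = [z0, z1, z2]
c = polymul_mod(polymul_mod(f, f, a, N), f, a, N)  # f(x)^3 mod (x^3 - a)

# Compare against the three closed-form coefficient polynomials for e = 3.
assert c[0] == (z0**3 + a*z1**3 + a*a*z2**3 + 6*a*z0*z1*z2) % N
assert c[1] == (3*z0*z0*z1 + 3*a*z0*z2*z2 + 3*a*z1*z1*z2) % N
assert c[2] == (3*z0*z0*z2 + 3*z0*z1*z1 + 3*a*z1*z2*z2) % N
```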
Now let $a = H(\mathsf{id})$<\/code> and consider a ciphertext polynomial $tf(x)^e \in \mathbb{Z}_N[x]\/(x^e - a)$<\/code>, which is in simplified (albeit less efficient) BLS form as considered in my paper, where $t \xleftarrow{\$} \mathbb{Z}_N$<\/code> and $f(x)$<\/code> is a random polynomial of degree $e - 1$<\/code> over $\mathbb{Z}_N$<\/code>. Now consider the power residue symbol over the function field $\mathbb{F}_p[x]$<\/code>, which we denote as $\left(\frac{\cdot}{\cdot}\right)_{e, \mathbb{F}_p}$<\/code>. The derivation in equation (1) in Section 5.2 of Zhao et al. shows that\n$$\left(\frac{tf(x)^e}{x^e - a}\right)_{e, \mathbb{F}_p} = 1 \mod{\mathfrak{p}}$$<\/code> where $(\mathfrak{p})$<\/code> is the ideal defined in Section 3.1 of my PKC paper. The same identity holds over $\mathbb{F}_q[x]$<\/code> and with the ideal $(\mathfrak{q})$<\/code>. If we compute the above with a random ciphertext, we will get a random residue symbol. The next question that arises is how to compute $\left(\frac{tf(x)^e}{x^e - a}\right)_{e, \mathbb{F}_p}$<\/code>. The trick is to repeatedly apply the general reciprocity law to $\left(\frac{tf(x)^e}{x^e - a}\right)_{e, \mathbb{F}_p}$<\/code> until we get a polynomial of degree $1$<\/code>, i.e. we have\n$$\left(\frac{tf(x)^e}{x^e - a}\right)_{e, \mathbb{F}_p} \equiv \left(\frac{c_p}{\mathfrak{p}}\right)_e \left(\frac{\alpha_p}{\Phi(x)}\right)_e \mod{\mathfrak{p}}$$<\/code> where $\alpha_p, c_p \in \mathbb{F}_p$<\/code> and $\Phi(x) \in \mathbb{F}_p[x]$<\/code> with $\mathsf{deg}(\Phi(x)) = 1$<\/code>. The analogous congruence holds for $(\mathfrak{q})$<\/code>. Of course, we will be working over $\mathbb{Z}_N[x]$<\/code>, so when we apply the reciprocity law, we will get $\gamma_N \equiv \alpha_p \mod{p}$<\/code> (and analogously for $q$<\/code>) and $c_N \equiv c_p \mod{p}$<\/code>.
So in short, the Galbraith test $G_e$<\/code> for small $e \ge 2$<\/code> is defined as\n$$G_e(a, c(x)) = \left(\frac{c_N\gamma_N}{\mathfrak{N}}\right)_e$$<\/code>. This is the familiar power residue symbol we compute in our paper (i.e. with the ideal $(\mathfrak{N})$<\/code>). For random ciphertexts $c(x)$<\/code>, the value $G_e(a, c(x))$<\/code> is conjectured by Zhao et al. to be uniform on the set of residue symbols, whereas for a ciphertext that is generated honestly with identity $\mathsf{id}$<\/code> (recall that $a = H(\mathsf{id})$<\/code>), with all but negligible probability we have that $G_e(a, c(x)) = 1$<\/code>. Therefore, we have an efficient test that breaks anonymity in BLS and related schemes. So now, I will give the code in Sage that implements the Galbraith test and allows you to run it on honest ciphertexts and random ciphertexts and see the results. The variable name conventions follow Zhao et al. You must first call setup_ideal with parameters $N$<\/code>, $e$<\/code> and $\mu$<\/code> where $\mu$<\/code> is a generator in $\mathbb{Z}_N$<\/code>. Because Sage does not implement an efficient algorithm for the residue symbol with composite (in our case, semiprime) ideals, you can only experiment for the moment with toy parameters, i.e. small $N$<\/code>. The class BLSAnonTester can be instantiated with $R_{\mathsf{id}} = H(\mathsf{id})$<\/code>, prime $e$<\/code> and the ideal generated by setup_ideal. Then you can create honest ciphertexts or random ciphertexts and run the method gt to evaluate the Galbraith test.
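To make the test concrete in the simplest case, here is a plain-Python sketch of the $e = 2$ instance (my own illustration with toy numbers, not code from either paper). For $e = 2$, one division step of the reciprocity computation collapses the test to a Jacobi symbol: writing an honest ciphertext as $c(x) = c_0 + c_1 x = t f(x)^2 \bmod (x^2 - a)$ with $f(x) = f_0 + f_1 x$, one can check that $c_0^2 - a c_1^2 = t^2 (f_0^2 - a f_1^2)^2$ is a square modulo $N$, so its Jacobi symbol is $1$.

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd positive n, by the standard binary algorithm."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def galbraith_test_e2(a, c0, c1, N):
    """e = 2 instance of the generalized test on the ciphertext c(x) = c0 + c1*x."""
    return jacobi((c0 * c0 - a * c1 * c1) % N, N)

# Toy (insecure) parameters: N = 103 * 107 and a = 2^2.
N, a = 103 * 107, 4
t, f0, f1 = 5, 3, 7                      # t and f(x) = f0 + f1*x
c0 = (t * (f0 * f0 + a * f1 * f1)) % N   # t*f(x)^2 reduced mod x^2 - a
c1 = (2 * t * f0 * f1) % N
assert galbraith_test_e2(a, c0, c1, N) == 1  # honest ciphertexts pass
```

A random pair $(c_0, c_1)$, by contrast, yields $-1$ about half the time, which is exactly what makes the test a distinguisher.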
Now the code:<\/p>\n<pre><code>\ndef setup_ideal(N, e, mu):\n F.<z> = NumberField(cyclotomic_polynomial(e))\n a1 = N*F.ring_of_integers() + (z - mu)*F.ring_of_integers()\n return a1\n\ndef find_params(a, b):\n c = 1\n while a.degree() > 0:\n next_a = b % a\n next_b = a\n\n c *= (-1)^(a.degree() * b.degree()) * a.lc()^b.degree() * b.lc()^(-a.degree())\n a = next_a\n b = next_b\n gamma = a\n return (c, gamma)\n\ndef rand_inv_elem(ZN):\n r = ZN.random_element()\n while not r.is_unit():\n r = ZN.random_element()\n return r\n\nclass BLSAnonTester(object):\n def __init__(self, Rid, e, a1):\n self.Rid = Rid\n self.e = e\n self.a1 = a1\n self.ZN = Rid.parent()\n R.<x> = self.ZN['x']\n self.R = R\n self.S = R.quotient(x^e - Rid)\n\n def gen_ct(self, m):\n f = self.S([rand_inv_elem(self.ZN) for i in range(self.e)])\n g = f^self.e\n t = rand_inv_elem(self.ZN)\n while self.J(t) != m:\n t = rand_inv_elem(self.ZN)\n return t*g\n\n def rand_ct(self):\n return self.S([rand_inv_elem(self.ZN) for i in range(self.e)])\n\n def J(self, t):\n z = self.a1.ring().gen()\n smap = {z^i : i for i in range(self.e)}\n return smap[self.a1.ring()(t).residue_symbol(self.a1, self.e)]\n\n def gt(self, c):\n c = self.R(c.list())\n modulus = self.R.gen()^self.e - self.Rid\n try:\n cn, gamma = find_params(c, modulus)\n return self.J(cn * gamma)\n except Exception:\n return None\n<\/code><\/pre>\nSo that's it. We can break anonymity in BLS and related schemes. Thanks for reading and Happy Christmas.<\/p>
","timestamp":1577260285,"edited":1577260285},{"updates":[],"title":"Low-depth Circuits for Computing the Hamming Weight","slug":"hamming","summary":"hamming","image":"","tags":["hamming"],"content":"Suppose we have a binary string with $n$<\/code> bits. In this post, I will look at constructing (Boolean) arithmetic circuits that compute the Hamming weight of a bitstring such that the circuits have minimal multiplicative depth. More precisely, we will develop algorithms, handling arbitrary $n$<\/code>, that generate circuits to compute the Hamming weight of a bitstring with $n$<\/code> bits.
Our circuits will consist of XOR and AND gates, i.e. the Boolean case of arithmetic circuits. I have updated <a href=\"http:\/\/ciphron.netsoc.ie\/blog?post=scdf\">SCDF<\/a> recently to include the algorithms described here and the code can be found on the <a href=\"https:\/\/github.com\/ciphron\/scdf\">SCDF GitHub repository<\/a>. So the code here is in Python, leveraging the SCDF framework.<\/p>\nThe asymptotic depth of an optimal Hamming weight circuit is $O(\log n)$<\/code>. Both approaches I take here achieve this but have different constants. The first approach, which I will now describe, is less efficient in terms of depth but more efficient in terms of circuit size.<\/p>\n<h2>First Approach<\/h2>\nOur first approach is probably the natural top-down construction. We use a Wallace-like tree structure that consists of adders. For simplicity, in this first approach, we will only handle an input bitstring whose length is a power of 2. Here is the code.<\/p>\n<pre><code>from base import *\nimport arith\nimport utils\n\ndef hamming_weight_add(v):\n n = len(v)\n if not utils.is_power_of_2(n):\n raise ValueError('Length of vector is not a power of 2')\n elems = map(lambda x: [x], v)\n while n >= 2:\n n \/= 2\n for i in range(n):\n elems[i] = arith.ripple_add(elems[2*i], elems[2*i + 1])\n w = elems[0]\n\n return w<\/code><\/pre>\nAs usual with SCDF, we import from the base module, but here we also rely on the arith and utils modules, which you can find in the SCDF repo. Now before computing the depth, we reduce the depth of the circuit by running scdf.reduce_depth. We will consider the depth of the circuit computing the most significant bit of the Hamming weight. Invariably, we see that this is $2\log{n} - 1$<\/code> where $n$<\/code> is the number of input bits. However, we can do better than that, and this brings us to our next algorithm.<\/p>\n<h2>Second Approach<\/h2>\nThis next approach is based on the idea that the $i$<\/code>-th bit of the Hamming weight can be computed as the XOR of all AND combinations of $2^i$<\/code> input bits.
To see this, suppose we have 3 bits: $a$<\/code>, $b$<\/code> and $c$<\/code>. Now the Hamming weight is the sum of the first two bits added to the third bit. We easily see that bit 0 of the weight is the XOR of the 3 bits. Now suppose we add the first two bits. Bit 0 of this sum is the XOR of the first two bits, i.e. $a + b$<\/code>. Bit 1 of this sum is the carry of these first two bits, i.e. both bits ANDed, $ab$<\/code>. Now we add the third bit. Bit 1 of the final sum is $ab + c\cdot (a + b) = ab + ac + bc$<\/code>, hence the XOR of all pairwise ANDs of the input bits. This naturally extends to a greater number of input bits.<\/p>\nOur algorithm builds upon and generalizes an approach by Cetin, Doroz, Sunar and Savas and handles vectors of any length. It builds up the products three at a time, attempting to minimize gates. Here is the code.<\/p>\n<pre><code>from base import *\nimport arith\nfrom math import log\nimport utils\n\ndef hamming_weight_direct(v):\n c = v\n result_len = int(log(len(v), 2)) + 1\n result = [zero] * result_len\n for i in range(result_len):\n s = []\n n = len(c)\n rem = n % 3\n for j in range(0, n - rem, 3):\n s.append(c[j] + c[j + 1] + c[j + 2])\n if rem != 0:\n s.append(sum(c[-rem:], zero))\n\n result[i] = sum(s, zero)\n\n next_c = []\n\n for j in range(0, n - rem, 3):\n next_c.append(c[j]*c[j + 1] + c[j]*c[j + 2] + c[j + 1]*c[j + 2])\n if rem == 2:\n next_c.append(c[-1]*c[-2])\n for j in range(len(s)):\n t = zero\n for k in range(j + 1, len(s)):\n t += s[j]*s[k]\n next_c.append(t)\n\n c = next_c\n\n return result<\/code><\/pre>\nIf you run scdf.reduce_depth on this circuit followed by scdf.deduplicate to remove duplicate subcircuits, and then obtain the gate count, you will find that this approach has a higher gate count than the previous approach for the same $n$<\/code>. However, this approach yields a smaller depth of exactly $\log n$<\/code>.<\/p>
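The claim underlying this approach, namely that bit $i$ of the Hamming weight equals the XOR of all AND-products of $2^i$ distinct input bits (the $2^i$-th elementary symmetric polynomial of the bits, reduced mod 2), can be sanity-checked exhaustively in plain Python. This is my own illustration, independent of SCDF:

```python
from itertools import combinations

def popcount_bit(bits, i):
    """Bit i of the Hamming weight, computed as the XOR (sum mod 2) of the
    AND-products over all subsets of 2**i distinct input bits."""
    acc = 0
    for combo in combinations(range(len(bits)), 2 ** i):
        prod = 1
        for k in combo:
            prod &= bits[k]
        acc ^= prod
    return acc

# Exhaustive check over all 5-bit inputs (the weight fits in 3 bits).
for x in range(32):
    bits = [(x >> k) & 1 for k in range(5)]
    w = sum(bits)
    assert all(popcount_bit(bits, i) == ((w >> i) & 1) for i in range(3))
```

This brute-force form costs $\binom{n}{2^i}$ products per output bit, which is why the algorithm above instead builds the products up incrementally, three inputs at a time.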
","timestamp":1575087094,"edited":1575087094},{"updates":[],"title":"Parasitic Numbers","slug":"parasitic","summary":"parasitic","image":"","tags":["parasitic"],"content":"When I wrote my <a href=\"http:\/\/ciphron.netsoc.ie\/blog?post=dt\">last post<\/a> on double trouble numbers, I wasn't aware of parasitic numbers, of which double trouble numbers are a special case (although our definition of double trouble numbers permitted leading zeros, something we will not permit here). In fact, there is a lot out there on these numbers, including a good Wikipedia article. However, I will take the same approach here as I took in my post on double trouble numbers, generalize it to $n$<\/code>-parasitic numbers and explore some further properties.
I recommend that you read my post on double trouble numbers first because I will make use of the ideas it presents.<\/p>\nAn $n$<\/code>-parasitic number $x$<\/code> gives the number $nx$<\/code> when its digits in base $b$<\/code> are right rotated one position. Unlike our treatment of double trouble numbers, we do not permit leading zeros. To preclude leading zeros, the least significant digit must be greater than or equal to $n$<\/code>. Generalizing from double trouble numbers, a <em>state<\/em> is now a pair in $\mathbb{Z}_b \times \mathbb{Z}_n$<\/code> and the state transition function $\delta : \mathbb{Z}_b \times \mathbb{Z}_n \to \mathbb{Z}_b \times \mathbb{Z}_n$<\/code> is defined as\n$$\delta : (d, c) \mapsto ((nd + c) \bmod{b}, \lfloor (nd + c) \/ b \rfloor)$$<\/code>\nIn fact, state transition corresponds to multiplication by $n$<\/code> modulo $nb - 1$<\/code>, a fact we will use later. As such, a <em>state<\/em> $(d, c)$<\/code> can be viewed as an integer $cb + d \in \mathbb{Z}_{nb - 1}$<\/code>. The code to generate an $n$<\/code>-parasitic number given $n$<\/code>, base $b$<\/code> and least significant digit $d$<\/code> in Python is therefore simple and terse.<\/p>\n<pre><code>def parasitic(n, b, d):\n \"\"\"Returns a list of the digits of the elementary n-parasitic number in base b \n with least significant digit d (ordered from least significant to most significant)\n \"\"\"\n\n assert n >= 2 and n < b and d >= n and d < b\n\n digits = [d]\n s = n * d\n while s != d:\n digits.append(s % b)\n s = (n * s) % (n*b - 1)\n return digits<\/code><\/pre>\nWe will now prove the following lemma.<\/p>\n<strong>Lemma<\/strong> <em>The length of an elementary $n$<\/code>-parasitic number in base $b$<\/code> divides the multiplicative order of $b$<\/code> modulo $nb - 1$<\/code>.<\/em><\/p>\n<em>Proof<\/em> $\;$ Let $x$<\/code> be an <em>elementary<\/em> $n$<\/code>-parasitic number of length $\ell$<\/code> with least significant digit $d$<\/code>.
Let $m$<\/code> be the multiplicative order of $b$<\/code> modulo $nb - 1$<\/code>. We need to show that $\ell \mid m$<\/code>. The following equality follows immediately from the definition of an $n$<\/code>-parasitic number:\n$$d \cdot b^{\ell - 1} + \frac{x - d}{b} = nx$$<\/code>\nWith simple algebra, we can rearrange the above to yield\n$$x = \frac{d \cdot (b^\ell - 1)}{nb - 1}$$<\/code>\nNow for $x$<\/code> to be an integer, as required, $nb - 1$<\/code> must divide $d \cdot (b^\ell - 1)$<\/code>. A sufficient condition for this is that $b^\ell \equiv 1 \pmod{nb - 1}$<\/code>, in which case $\ell = m$<\/code>, the multiplicative order of $b$<\/code> modulo $nb - 1$<\/code>. Suppose $\ell < m$<\/code>. In this case, it holds that $nb - 1 \mid d \cdot (t - 1)$<\/code> where $t = b^\ell \pmod{nb - 1}$<\/code>. Now from the equality above, there is a larger $n$<\/code>-parasitic number $x'$<\/code> with the same least significant digit $d$<\/code> whose length is $m$<\/code> and which is equal to $\frac{d \cdot (b^m - 1)}{nb - 1}$<\/code>. From the generalization of our result in the double trouble numbers post, we know that all $n$<\/code>-parasitic numbers with the same least significant digit are unique up to concatenation. Since $x$<\/code> is elementary and $x'$<\/code> is formed by concatenating copies of $x$<\/code>, it follows that $\ell \mid m$<\/code>. $$\tag*{\blacksquare}$$<\/code><\/p>\nAs a corollary, it holds that $\ell \mid \phi(nb - 1)$<\/code> where $\phi$<\/code> is Euler's totient function. Furthermore, we note that the multiplicative order of $b$<\/code> modulo $nb - 1$<\/code> is equal to the multiplicative order of $n$<\/code> modulo $nb - 1$<\/code>.<\/p>\nLet us define an equivalence relation on $n$<\/code>-parasitic numbers. We write $x \sim_{n, b} y$<\/code> if the $n$<\/code>-parasitic numbers $x$<\/code> and $y$<\/code> are circular permutations of each other in base $b$<\/code>.
Let $P_{n, b}$<\/code> be the set of $n$<\/code>-parasitic numbers in base $b$<\/code>. The number of equivalence classes $|P_{n, b}\/\sim_{n, b}|$<\/code> under the relation $\sim_{n, b}$<\/code> is related to the number of cyclotomic cosets of $n$<\/code> modulo $nb - 1$<\/code>. More formally, we have the following equality:\n$$|P_{n, b}\/\sim_{n, b}| = |\{C_d : d \in \{n, \ldots, b - 1\}\}|$$<\/code> where $C_d := \{d \cdot n^k \bmod{(nb - 1)} : k \geq 0\}$<\/code>. For shorthand, let us define $a_{n}(b) = |P_{n, b}\/\sim_{n, b}|$<\/code>.<\/p>\nThe sequence generated by $a_2(3), a_2(4), \ldots$<\/code> is the sequence <a href=\"http:\/\/oeis.org\/A006694\">A006694<\/a> in the OEIS. More precisely, $a_2(b) = \text{A006694}(b - 1)$<\/code>.<\/p>\nWe will now take a look at the length of $n$<\/code>-parasitic numbers, in particular, the bases that give minimal and maximal lengths. Recall that the minimal length of an $n$<\/code>-parasitic number is 2. The sequence of bases that give minimal length $n$<\/code>-parasitic numbers is generated by $s_n(k) = (n + 1)k + n$<\/code> for $k \geq 1$<\/code>. To see this, observe that for any $k \geq 1$<\/code>, setting the least significant digit to $nk + 1$<\/code> yields an $n$<\/code>-parasitic number of length 2 in base $(n + 1)k + n$<\/code>. For $n = 2$<\/code>, we have that $s_2$<\/code> generates the sequence <a href=\"http:\/\/oeis.org\/A016789\">A016789<\/a> in the OEIS.<\/p>\nNow in regard to maximal length $n$<\/code>-parasitic numbers, recall that the maximal length in base $b$<\/code> is $nb - 2$<\/code>. If a base has a maximal length $n$<\/code>-parasitic number, then it follows immediately from our corollary above that $nb - 1$<\/code> is prime. The converse does not necessarily hold. For certain choices of $n$<\/code>, there are no maximal length $n$<\/code>-parasitic numbers.
An example is $n = 4$<\/code> because any base $b$<\/code> is a quadratic residue modulo $4b - 1$<\/code>, with square roots $2b$<\/code> and $2b - 1$<\/code>, and therefore its multiplicative order modulo $4b - 1$<\/code> cannot be $4b - 2$<\/code>. We do not have a general form for the bases that give maximal lengths. An interesting special case is $n = 2$<\/code>, where the sequence of bases giving maximal lengths is the sequence <a href=\"http:\/\/oeis.org\/A051733\">A051733<\/a> in the OEIS.<\/p>\nWe will make one final remark. For base 10, which is quite a commonly-used base, we discovered that 142857, the best-known cyclic number in base 10, which has some other remarkable properties, is the only $n$<\/code>-parasitic number in base 10 whose length is not equal to 18, the multiplicative order of 10 modulo $2 \cdot 10 - 1$<\/code>. This number is 5-parasitic.<\/p>","raw":"When I wrote my [last post](http:\/\/ciphron.netsoc.ie\/blog?post=dt) on double trouble numbers, I wasn't aware of parasitic numbers, of which double trouble numbers are a special case (although our definition of double trouble numbers permitted leading zeros, something we will not permit here). In fact, there is a lot out there on these numbers including a good wikipedia article. However, I will take the same approach here as I took in my post on double trouble numbers, generalize it to $n$-parasitic numbers and explore some further properties. I recommend that you read my post on double trouble numbers first because I will make use of the ideas it presents.\n\nAn $n$-parasitic number $x$ gives the number $nx$ when its digits in base $b$ are right rotated one position. Unlike our treatment of double trouble numbers, we do not permit leading zeros. To preclude leading zeros, the least significant digit must be greater or equal to $n$.
Generalizing from double trouble numbers, a *state* is now a pair in $\\mathbb{Z}_b \\times \\mathbb{Z}_n$ and the state transition function $\\delta : \\mathbb{Z}_b \\times \\mathbb{Z}_n \\to \\mathbb{Z}_b \\times \\mathbb{Z}_n$ is defined as\n$$\\delta : (d, c) \\mapsto ((nd + c) \\bmod b, \\lfloor (nd + c) \/ b \\rfloor)$$\nIn fact, state transition corresponds to multiplication by $n$ modulo $nb - 1$, a fact we will use later. As such, a *state* $(d, c)$ can be viewed as an integer $cb + d \\in \\mathbb{Z}_{nb - 1}$. The code to generate an $n$-parasitic number given $n$, base $b$ and least significant digit $d$ in Python is therefore simple and terse.\npython\ndef parasitic(n, b, d):\n    \"\"\"Return a list of the digits of the elementary n-parasitic number in base b\n    with least significant digit d, ordered from least significant to most significant.\n    \"\"\"\n\n    assert n >= 2 and n < b and d >= n and d < b\n\n    digits = [d]\n    s = n * d\n    while s != d:\n        digits.append(s % b)\n        s = (n * s) % (n * b - 1)\n    return digits\n\n\nWe will now prove the following lemma.\n\n**Lemma** *The length of an elementary $n$-parasitic number in base $b$ divides the multiplicative order of $b$ modulo $nb - 1$.*\n\n*Proof* $\\;$Let $x$ be an *elementary* $n$-parasitic number of length $\\ell$ with least significant digit $d$. Let $m$ be the multiplicative order of $b$ modulo $nb - 1$. We need to show that $\\ell \\mid m$. The following equality follows immediately from the definition of an $n$-parasitic number:\n$$d \\cdot b^{\\ell - 1} + \\frac{x - d}{b} = nx$$\nWith simple algebra, we can rearrange the above to yield\n$$x = \\frac{d \\cdot (b^\\ell - 1)}{nb - 1}$$\nNow for $x$ to be an integer as required, $nb - 1$ must divide $d \\cdot (b^\\ell - 1)$. A sufficient condition for this is that $b^\\ell \\equiv 1 \\pmod{nb - 1}$, in which case $\\ell = m$, the multiplicative order of $b$ modulo $nb - 1$. Suppose $\\ell < m$. 
In this case, it holds that $nb - 1 \\mid d \\cdot (t - 1)$ where $t = b^\\ell \\bmod (nb - 1)$. Now from the equality above, there is a larger $n$-parasitic number $x'$ with the same least significant digit $d$ whose length is $m$ and which is equal to $\\frac{d \\cdot (b^m - 1)}{nb - 1}$. From the generalization of our result in the double trouble numbers post, we know that all $n$-parasitic numbers with the same least significant digit are unique up to concatenation. Since $x$ is elementary and $x'$ is formed by concatenating copies of $x$, it follows that $\\ell \\mid m$. $$\\tag*{\\blacksquare}$$\n\nAs a corollary, it holds that $\\ell \\mid \\phi(nb - 1)$ where $\\phi$ is Euler's totient function. Furthermore, we note that the multiplicative order of $b$ modulo $nb - 1$ is equal to the multiplicative order of $n$ modulo $nb - 1$; indeed $nb \\equiv 1 \\pmod{nb - 1}$, so $n$ and $b$ are mutual inverses modulo $nb - 1$.\n\nLet us define an equivalence relation on $n$-parasitic numbers. We write $x \\sim_{n, b} y$ if the $n$-parasitic numbers $x$ and $y$ are circular permutations of each other in base $b$. Let $P_{n, b}$ be the set of $n$-parasitic numbers in base $b$. The number of equivalence classes $|P_{n, b}\/\\sim_{n, b}|$ under the relation $\\sim_{n, b}$ is related to the number of cyclotomic cosets of $n$ modulo $nb - 1$. More formally, we have the following equality\n$$|P_{n, b}\/\\sim_{n, b}| = |\\{C_d : d \\in \\{n, \\ldots, b - 1\\}\\}|$$ where $C_d := \\{d \\cdot n^k \\bmod (nb - 1) : k \\geq 0\\}$. Here $d$ ranges over the admissible least significant digits. For shorthand, let us define $a_{n}(b) = |P_{n, b}\/\\sim_{n, b}|$.\n\nThe sequence generated by $a_2(3), a_2(4), \\ldots$ is the sequence [A006694](http:\/\/oeis.org\/A006694) in the OEIS. More precisely, $a_2(b) = \\text{A006694}(b - 1)$.\n\nWe will now take a look at the length of $n$-parasitic numbers, in particular, the bases that give minimal and maximal lengths. Recall that the minimal length of an $n$-parasitic number is 2. 
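As an aside, the quantity $a_n(b)$ from the coset-counting formula above is easy to compute directly. The sketch below (the function name is illustrative) collects the cosets generated by multiplication by $n$ modulo $nb - 1$, one for each admissible least significant digit:

```python
def count_classes(n, b):
    """Count the distinct cosets {d * n^k mod (nb - 1)} over the
    admissible least significant digits d in {n, ..., b - 1}."""
    m = n * b - 1
    cosets = set()
    for d in range(n, b):
        coset, x = set(), d
        while x not in coset:      # multiplication by n mod m cycles back to d
            coset.add(x)
            x = (x * n) % m
        cosets.add(frozenset(coset))
    return len(cosets)

# a_2(3), a_2(4), a_2(5) = 1, 2, 2, agreeing with a computation by hand
assert [count_classes(2, b) for b in (3, 4, 5)] == [1, 2, 2]
```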
The sequence of bases that give minimal length $n$-parasitic numbers is generated by $s_n(k) = (n + 1)k + n$ for $k \\geq 1$. To see this, observe that for any $k \\geq 1$, setting the least significant digit to $nk + 1$ yields an $n$-parasitic number of length 2 in base $(n + 1)k + n$. For $n = 2$, we have that $s_2$ generates the sequence [A016789](http:\/\/oeis.org\/A016789) in the OEIS.\n\nNow in regard to maximal length $n$-parasitic numbers, recall that the maximal length in base $b$ is $nb - 2$. If a base has a maximal length $n$-parasitic number, then it follows immediately from our corollary above that $nb - 1$ is prime. The converse does not necessarily hold. For certain choices of $n$, there are no maximal length $n$-parasitic numbers. An example is $n = 4$: any base $b$ is a quadratic residue modulo $4b - 1$ (its square roots are $2b$ and $2b - 1$), and a quadratic residue cannot generate the multiplicative group modulo the prime $4b - 1$, whose order $4b - 2$ is even; therefore the multiplicative order of $b$ modulo $4b - 1$ cannot be $4b - 2$. We do not have a general form for the bases that give maximal lengths. An interesting special case is $n = 2$, where the sequence of bases giving maximal lengths is the sequence [A051733](http:\/\/oeis.org\/A051733) in the OEIS.\n\nWe will make one final remark. For base 10, we discovered that 142857, the best-known cyclic number (in base 10), which has some other remarkable properties, is the only elementary $n$-parasitic number in base 10 whose length is not equal to the multiplicative order of 10 modulo $10n - 1$. This number is 5-parasitic: its length is 6, whereas the multiplicative order of 10 modulo $5 \\cdot 10 - 1 = 49$ is 42.\n
I will share some of their properties in this post.<\/p>\nFirst, what is a DT number? A DT number with respect to some base $b$<\/code> is a positive integer $n$<\/code> such that when the digits (in base $b$<\/code>) are right rotated one position, the resulting number is $2n$<\/code>, i.e. double the original number. Some of the questions we will answer are: are there an infinite number of DT numbers for a given base and what form do they take? We will first prove a result that will help answer these questions, namely we will show that for every base $b \\geq 2$<\/code>, there exists a DT number with least significant digit $d$<\/code> for all $d \\in \\{1, \\ldots, b - 1\\}$<\/code>. The proof proceeds as follows.<\/p>\nLet $d_{k - 1}b^{k - 1} + d_{k - 2}b^{k - 2} + \\cdots + d_0$<\/code> be a $k$<\/code>-digit DT number. Then by definition of a DT number, we have\n$$d_0b^{k - 1} + d_{k - 1}b^{k - 2} + \\cdots + d_1 = 2 \\cdot (d_{k - 1}b^{k - 1} + d_{k - 2}b^{k - 2} + \\cdots + d_0)$$<\/code>\nThis implies that $d_1 = 2d_0 \\bmod b$<\/code> and $d_2 = (2d_1 + c) \\bmod b$<\/code> where $c \\in \\{0, 1\\}$<\/code> is the carry of $2 \\cdot d_0$<\/code>, i.e. $c = \\lfloor 2d_0 \/ b \\rfloor$<\/code>. More generally, we have $d_{i + 1} = (2d_i + c_i) \\bmod b$<\/code> and $c_{i + 1} = \\lfloor (2d_i + c_i) \/ b \\rfloor$<\/code> with $c_0 = 0$<\/code> and $d_0$<\/code> being the least significant digit.\nWe will view the pair $(d_i, c_i)$<\/code> as a state<\/em>. The state transition function $\\delta : \\mathbb{Z}_b \\times \\{0, 1\\} \\to \\mathbb{Z}_b \\times \\{0, 1\\}$<\/code> is defined as above; that is\n$$\\delta : (d_i, c_i) \\mapsto ((2d_i + c_i) \\bmod b, \\lfloor (2d_i + c_i) \/ b \\rfloor)$$<\/code>\nThe starting state for a DT number corresponding to its least significant digit is $(d_0, 0)$<\/code> where $d_0 \\in \\mathbb{Z}_b$<\/code> is the least significant digit. 
A DT number is generated when the state returns to $(d_0, 0)$<\/code> and the digits of the DT number are the $d_i$<\/code> components of the intermediate states in addition to the starting state. The state will always return to the starting state, i.e. it will not get stuck in cycles that exclude the starting state. To see this, observe that every state has a unique predecessor state, and therefore, necessarily, the state must return to the starting state. Hence, starting with any least significant digit, we can generate a finite length DT number in any given base. This proves the result.<\/p>\nThe maximum length of what we call an elementary<\/em> DT number is $2(b - 1)$<\/code> digits since there are a total of $2b$<\/code> possible states and the states $(0, 0)$<\/code> and $(b - 1, 1)$<\/code> cannot be encountered because a DT number is positive and the latter state transitions to itself. By elementary<\/em>, we mean that the DT number is the minimum length DT number with a chosen least significant digit. The minimum length of a DT number is 2 digits. Now given a DT number with $k$<\/code> digits, concatenating the number (joining its sequence of digits) with itself yields another (non-elementary) DT number with $2k$<\/code> digits. Indeed repeating the process yields DT numbers whose lengths are arbitrary multiples of $k$<\/code>. And this tells us that there are an infinite number of DT numbers for any chosen base $b \\geq 2$<\/code>.<\/p>\nAn algorithm in Python to find the minimum (by value) DT number for a chosen base is shown below.<\/p>\ndef transition(b, state):\n    \"\"\"State transition function with base b.\"\"\"\n\n    d, c = state\n    return ((2 * d + c) % b, (2 * d + c) \/\/ b)\n\ndef find_min_dt(b):\n    \"\"\"Return a list of the digits of the minimum (by value) DT number for base b ordered from least significant to most significant. 
\"\"\"\n\n min_digits = None\n min_len = 2 * b\n for lsd in range(1, b):\n digits = [lsd]\n init_state = (lsd, 0)\n state = transition(b, init_state)\n while state != init_state:\n d, c = state\n digits.append(d)\n state = transition(b, state)\n if len(digits) < min_len:\n min_digits = digits\n min_len = len(digits)\n return min_digits<\/code><\/pre>","raw":"A few years ago I came across a programming problem concerning numbers that were referred to as Double Trouble (DT) numbers, and I will borrow the term here. I looked into them with a friend, Colm Bhandal, because we found them interesting. I will share some of their properties in this post.\n\nFirst, what is a DT number? A DT number with respect to some base $b$ is a positive integer $n$ such that when the digits (in base $b$) are right rotated one position, the resulting number is $2n$ i.e. double the original number. Some of the questions we will answer are: are there an infinite number of DT numbers for a given base and what form do they take? We will first prove a result that will help answer these questions, namely we will show that for every base $b \\geq 2$, there exists a DT number with least significant digit $d$ for all $d \\in \\{1, \\ldots, b - 1\\}$. The proof proceeds as follows.\n\nLet $d_{k - 1}b^{k - 1} + d_{k - 2}b^{k - 2} + \\cdots + d_0$ be a $k$-digit DT number. Then by definition of a DT number, we have\n$$d_0b^{k - 1} + d_{k - 1}b^{k - 2} + \\cdots + d_1 = 2 \\cdot (d_{k - 1}b^{k - 1} + d_{k - 2}b^{k - 2} + \\cdots + d_0)$$\nThis implies that $d_1 = 2d_0$ and $d_2 = 2d_1 + c$ where $c \\in \\{0, 1\\}$ is the carry of $2 \\cdot d_0$ i.e. $c = \\lfloor 2d_0 \/ b \\rfloor$. More generally, we have $d_{i + 1} = 2d_i + c_i$ and $c_{i + 1} = \\lfloor 2d_i \/ b \\rfloor$ with $c_0 = 0$ and $d_0$ being the least significant digit. \nWe will view the pair $(d_i, c_i)$ as a *state*. 
The state transition function $\\delta : \\mathbb{Z}_b \\times \\{0, 1\\} \\to \\mathbb{Z}_b \\times \\{0, 1\\}$ is defined as above; that is\n$$\\delta : (d_i, c_i) \\mapsto ((2d_i + c_i) \\bmod b, \\lfloor (2d_i + c_i) \/ b \\rfloor)$$\nThe starting state for a DT number corresponding to its least significant digit is $(d_0, 0)$ where $d_0 \\in \\mathbb{Z}_b$ is the least significant digit. A DT number is generated when the state returns to $(d_0, 0)$ and the digits of the DT number are the $d_i$ components of the intermediate states in addition to the starting state. The state will always return to the starting state, i.e. it will not get stuck in cycles that exclude the starting state. To see this, observe that every state has a unique predecessor state, and therefore, necessarily, the state must return to the starting state. Hence, starting with any least significant digit, we can generate a finite length DT number in any given base. This proves the result.\n\nThe maximum length of what we call an *elementary* DT number is $2(b - 1)$ digits since there are a total of $2b$ possible states and the states $(0, 0)$ and $(b - 1, 1)$ cannot be encountered because a DT number is positive and the latter state transitions to itself. By *elementary*, we mean that the DT number is the minimum length DT number with a chosen least significant digit. The minimum length of a DT number is 2 digits. Now given a DT number with $k$ digits, concatenating the number (joining its sequence of digits) with itself yields another (non-elementary) DT number with $2k$ digits. Indeed repeating the process yields DT numbers whose lengths are arbitrary multiples of $k$. 
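The concatenation argument can be checked numerically. In the sketch below (helper names are illustrative), digits are stored least significant first, as elsewhere in the post:

```python
def value_of(ds, b):
    """Value of a digit list (least significant first) in base b."""
    return sum(d * b**i for i, d in enumerate(ds))

def is_dt(ds, b):
    """DT property: right-rotating the digits doubles the value."""
    rotated = ds[1:] + ds[:1]  # least significant digit moves to the top
    return value_of(rotated, b) == 2 * value_of(ds, b)

ds = [2, 4, 3, 2, 0, 1]        # an elementary DT number in base 5 (lsd 2)
assert is_dt(ds, 5)
assert is_dt(ds + ds, 5)       # concatenation is again a DT number
assert is_dt(ds + ds + ds, 5)  # and so on
```

Right rotation commutes with repeating the digit sequence, which is why the doubling property survives concatenation.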
And this tells us that there are an infinite number of DT numbers for any chosen base $b \\geq 2$.\n\nAn algorithm in Python to find the minimum (by value) DT number for a chosen base is shown below.\npython\ndef transition(b, state):\n    \"\"\"State transition function with base b.\"\"\"\n\n    d, c = state\n    return ((2 * d + c) % b, (2 * d + c) \/\/ b)\n\ndef find_min_dt(b):\n    \"\"\"Return a list of the digits of the minimum (by value) DT number for base b\n    ordered from least significant to most significant.\n    \"\"\"\n\n    min_digits = None\n    min_len = 2 * b\n    for lsd in range(1, b):\n        digits = [lsd]\n        init_state = (lsd, 0)\n        state = transition(b, init_state)\n        while state != init_state:\n            d, c = state\n            digits.append(d)\n            state = transition(b, state)\n        if len(digits) < min_len:\n            min_digits = digits\n            min_len = len(digits)\n    return min_digits\n","timestamp":1545973976,"edited":1545973976},{"updates":[],"title":"Simple Circuit Description Framework","slug":"scdf","summary":"SCDF","image":"","tags":["scdf"],"content":"I have deprecated SCDL<\/a>. It is much simpler and more terse to express arithmetic circuits in Python. Simple Circuit Description Framework (SCDF) is a simple Python framework to express, manipulate and evaluate arithmetic circuits with particular suitability for fully homomorphic encryption (FHE). It is available at https:\/\/github.com\/ciphron\/scdf<\/a><\/a>. I intend to write\/use Python bindings for the FHE library HElib to port my FHE Evaluator<\/a> to Python and integrate it with this framework.<\/p>\nCircuits are composed of two basic operations: addition (+) and multiplication (*). We will consider an example of a Boolean circuit. In the Boolean case, + corresponds to XOR and * corresponds to AND. 
In the following example, we will define and evaluate a circuit that checks the equality of two 8-bit integers.<\/p>\nFirst let us define some constants.<\/p>\nimport scdf\n\nzero = scdf.Constant(0)\none = scdf.Constant(1)<\/code><\/pre>\nNow we are ready to define the equality checking circuit for two bits.<\/p>\ndef eq1(x, y):\n    return x + y + one<\/code><\/pre>\nNext we will define the equality checking circuit for two arbitrary length integers (of equal length). Each integer is given as a list of bits (more precisely, of wires that each evaluate to a bit) ordered from least significant to most significant.<\/p>\ndef eq(xs, ys):\n    is_eq = one\n    for i in range(len(xs)):\n        is_eq *= eq1(xs[i], ys[i])\n    return is_eq<\/code><\/pre>\nThe code so far can be found in the file base.py<\/a> included with scdf.<\/p>\nNow we have defined our desired circuit. The next step is to set up the inputs.<\/p>\nNUM_BITS = 8  # we will use 8-bit integers\nx = vect_input('x', NUM_BITS)\ny = vect_input('y', NUM_BITS)<\/code><\/pre>\nWe also have to set up the circuit to use the above inputs. Since the depth of the circuit (eq) that we defined may not be optimal, we will also avail of the scdf.reduce_depth function.<\/p>\ncirc = scdf.reduce_depth(eq(x, y))<\/code><\/pre>\nTo evaluate a circuit, we have to specify what type the values are, and the addition and multiplication operators must be overloaded by this type. For this purpose, we define a simple Bit class.<\/p>\nclass Bit(object):\n    def __init__(self, value):\n        self.value = value\n\n    def __mul__(self, other):\n        return Bit((self.value * other.value) % 2)\n\n    def __add__(self, other):\n        return Bit((self.value + other.value) % 2)<\/code><\/pre>\nTo evaluate the circuit, we need to set up an input map, which maps the names of inputs to their values. In our case, the values will be of type Bit. 
We have the following helper functions to aid in this task.<\/p>\ndef to_bin(n, nbits):\n    nbin = [0] * nbits\n    i = 0\n    while i < nbits and n != 0:\n        nbin[i] = n & 1\n        n >>= 1\n        i += 1\n    return nbin\n\ndef add_num_to_map(name, num, input_map):\n    bin_rep = to_bin(num, NUM_BITS)\n    for i in range(NUM_BITS):\n        input_map['%s%d' % (name, i)] = Bit(bin_rep[i])<\/code><\/pre>\nSuppose the first integer, x, is set to 12 and the second integer, y, is set to 13. We write the following code:<\/p>\ninput_map = {}\nadd_num_to_map('x', 12, input_map)\nadd_num_to_map('y', 13, input_map)<\/code><\/pre>\nFinally, we are ready to evaluate the circuit and print the result.<\/p>\nmap_constant = lambda v: Bit(v)\nresult = scdf.eval_circuit(circ, input_map, map_constant)\nprint result.value<\/code><\/pre>\nNote the map_constant function maps constant values given as integers to instances of the appropriate type, in our case, Bit.<\/p>\nThe above code is a gentle illustration of scdf. For a more complex circuit, see the Direct Sort sorting circuit in sort.py<\/a>. Many FHE schemes support batching. They operate on vectors and support SIMD operations. We model this in scdf also by permitting left and right vector rotations. In sort.py<\/a>, a SIMD variant of the Direct Sort algorithm is implemented and you can see how to use it in test_sort.py<\/a>.<\/p>","raw":"\nI have deprecated [SCDL](http:\/\/ciphron.netsoc.ie\/blog?post=scdl). It is much simpler and more terse to express arithmetic circuits in Python. Simple Circuit Description Framework (SCDF) is a simple Python framework to express, manipulate and evaluate arithmetic circuits with particular suitability for fully homomorphic encryption (FHE). It is available at [https:\/\/github.com\/ciphron\/scdf](https:\/\/github.com\/ciphron\/scdf). 
I intend to write\/use Python bindings for the FHE library HElib to port my [FHE Evaluator](http:\/\/ciphron.netsoc.ie\/blog?post=fhe) to Python and integrate it with this framework.\n\nCircuits are composed of two basic operations: addition (+) and multiplication (\\*). We will consider an example of a Boolean circuit. In the Boolean case, + corresponds to XOR and \\* corresponds to AND. In the following example, we will define and evaluate a circuit that checks the equality of two 8-bit integers.\n\nFirst let us define some constants.\npython\nimport scdf\n\nzero = scdf.Constant(0)\none = scdf.Constant(1)\n\nNow we are ready to define the equality checking circuit for two bits.\npython\ndef eq1(x, y):\n    return x + y + one\n\nNext we will define the equality checking circuit for two arbitrary length integers (of equal length). Each integer is given as a list of bits (more precisely, of wires that each evaluate to a bit) ordered from least significant to most significant.\npython\ndef eq(xs, ys):\n    is_eq = one\n    for i in range(len(xs)):\n        is_eq *= eq1(xs[i], ys[i])\n    return is_eq\n\n\nThe code so far can be found in the file [base.py](https:\/\/github.com\/ciphron\/scdf\/blob\/master\/base.py) included with scdf.\n\nNow we have defined our desired circuit. The next step is to set up the inputs.\npython\nNUM_BITS = 8  # we will use 8-bit integers\nx = vect_input('x', NUM_BITS)\ny = vect_input('y', NUM_BITS)\n\nWe also have to set up the circuit to use the above inputs. Since the depth of the circuit (eq) that we defined may not be optimal, we will also avail of the scdf.reduce_depth function.\npython\ncirc = scdf.reduce_depth(eq(x, y))\n\nTo evaluate a circuit, we have to specify what type the values are, and the addition and multiplication operators must be overloaded by this type. 
For this purpose, we define a simple Bit class.\npython\nclass Bit(object):\n    def __init__(self, value):\n        self.value = value\n\n    def __mul__(self, other):\n        return Bit((self.value * other.value) % 2)\n\n    def __add__(self, other):\n        return Bit((self.value + other.value) % 2)\n\nTo evaluate the circuit, we need to set up an input map, which maps the names of inputs to their values. In our case, the values will be of type Bit. We have the following helper functions to aid in this task.\n\npython\ndef to_bin(n, nbits):\n    nbin = [0] * nbits\n    i = 0\n    while i < nbits and n != 0:\n        nbin[i] = n & 1\n        n >>= 1\n        i += 1\n    return nbin\n\ndef add_num_to_map(name, num, input_map):\n    bin_rep = to_bin(num, NUM_BITS)\n    for i in range(NUM_BITS):\n        input_map['%s%d' % (name, i)] = Bit(bin_rep[i])\n\nSuppose the first integer, x, is set to 12 and the second integer, y, is set to 13. We write the following code:\npython\ninput_map = {}\nadd_num_to_map('x', 12, input_map)\nadd_num_to_map('y', 13, input_map)\n\nFinally, we are ready to evaluate the circuit and print the result.\npython\nmap_constant = lambda v: Bit(v)\nresult = scdf.eval_circuit(circ, input_map, map_constant)\nprint result.value\n\nNote the map_constant function maps constant values given as integers to instances of the appropriate type, in our case, Bit.\n\nThe above code is a gentle illustration of scdf. For a more complex circuit, see the Direct Sort sorting circuit in [sort.py](https:\/\/github.com\/ciphron\/scdf\/blob\/master\/sort.py). Many FHE schemes support batching. They operate on vectors and support SIMD operations. We model this in scdf also by permitting left and right vector rotations. 
In [sort.py](https:\/\/github.com\/ciphron\/scdf\/blob\/master\/sort.py), a SIMD variant of the Direct Sort algorithm is implemented and you can see how to use it in [test_sort.py](https:\/\/github.com\/ciphron\/scdf\/blob\/master\/test_sort.py).","timestamp":1545197885,"edited":1545197885},{"updates":[],"title":"Fully Homomorphic Encryption Evaluator","slug":"fhe","summary":"FHE Evaluator","image":"","tags":["fhe"],"content":"I wrote a fully homomorphic encryption (FHE) evaluator which you can find at https:\/\/github.com\/ciphron\/fhe<\/a><\/a>. This software runs programs written in SCDL<\/a> (see the link for compiling imperative languages to SCDL) on encrypted data. It allows you to generate a public and private key, encrypt the inputs of an SCDL program to produce a ciphertext file and evaluate the SCDL program homomorphically with a chosen ciphertext file. The FHE evaluator uses HElib<\/a> in the backend, which is an implementation of the BGV<\/a> scheme. When you generate a keypair, you must choose a security parameter and a maximum circuit depth to support. The former corresponds to the security strength (in bits), e.g. 128 bits. The latter comes into play because the current version of HElib does not implement bootstrapping and therefore only circuits of bounded depth can be evaluated. The program allows you to check the depth of a circuit described in an SCDL file. Using the evaluator, you will quickly see that FHE is presently by no means practical. However, you will be able to run small, simple programs on encrypted data. I must point out that my implementation does not exploit the full power of the HElib library insofar as vectors are not encrypted, only single bits. This means the ciphertexts are far less compact than they could be. I hope to add vector support at a later stage. Also, the implementation probably has bugs which need to be ironed out. Email me with details if you find any. 
Hopefully, this program will be useful to someone for experimentation or research use.<\/p>","raw":"\nI wrote a fully homomorphic encryption (FHE) evaluator which you can find at [https:\/\/github.com\/ciphron\/fhe](https:\/\/github.com\/ciphron\/fhe). This software runs programs written in [SCDL](\/blog?post=scdl) (see the link for compiling imperative languages to SCDL) on encrypted data. It allows you to generate a public and private key, encrypt the inputs of an SCDL program to produce a ciphertext file and evaluate the SCDL program homomorphically with a chosen ciphertext file. The FHE evaluator uses [HElib](https:\/\/github.com\/shaih\/HElib) in the backend, which is an implementation of the [BGV](https:\/\/eprint.iacr.org\/2011\/277) scheme. When you generate a keypair, you must choose a security parameter and a maximum circuit depth to support. The former corresponds to the security strength (in bits), e.g. 128 bits. The latter comes into play because the current version of HElib does not implement bootstrapping and therefore only circuits of bounded depth can be evaluated. The program allows you to check the depth of a circuit described in an SCDL file. Using the evaluator, you will quickly see that FHE is presently by no means practical. However, you will be able to run small, simple programs on encrypted data. I must point out that my implementation does not exploit the full power of the HElib library insofar as vectors are not encrypted, only single bits. This means the ciphertexts are far less compact than they could be. I hope to add vector support at a later stage. Also, the implementation probably has bugs which need to be ironed out. Email me with details if you find any. 
Hopefully, this program will be useful to someone for experimentation or research use.\n","timestamp":1487166794,"edited":1487166794},{"updates":[],"title":"Simple Circuit Description Language","slug":"scdl","summary":"SCDL","image":"","tags":["scdl"],"content":"Simple Circuit Description Language (SCDL) is a simple functional language to describe an arithmetic circuit over some ring. It has two basic operations, addition (+) and multiplication (*). The language can be interpreted in any ring. For example, when interpreted in the Boolean field, "+" corresponds to XOR and "*" corresponds to AND. I developed this language for the purpose of Fully Homomorphic Encryption (FHE). FHE allows computation to be performed on encrypted data but current schemes evaluate the computation as a circuit. Most such schemes evaluate circuits that consist of AND and XOR gates. So far an evaluator has been implemented for SCDL that can be found at https:\/\/github.com\/ciphron\/scdl<\/a><\/a>. For documentation on the language and how to use the evaluator, see https:\/\/github.com\/ciphron\/scdl\/blob\/master\/scdl.pdf<\/a><\/a>.<\/p>\nFairplayMP Simple Function Description Language (SFDL) v2.0 Compiler to SCDL<\/h3>\nThe FairplayMP project (http:\/\/www.cs.huji.ac.il\/project\/Fairplay\/FairplayMP.html<\/a><\/a>) incorporates a compiler for a language called Simple Function Description Language (SFDL) that translates the SFDL code into a Boolean circuit. SFDL version 2.0 (http:\/\/www.cs.huji.ac.il\/project\/Fairplay\/FairplayMP\/SFDL2.0.pdf<\/a>) is a simple imperative language with some constraints such as bounded loops (i.e. supporting a maximum number of iterations given by a constant) and no recursion. The circuit that is generated by the SFDLv2.0 compiler is represented by a low-level language created by the FairplayMP developers. I've written a compiler that translates that low-level language into SCDL. 
A vars file (see the SCDL documentation) is also automatically produced that gives a high-level description of the inputs and outputs to a program in terms of their names and types. The compiler is available at https:\/\/github.com\/ciphron\/sfdl_to_scdl<\/a><\/a>. This compiler allows you to simply compile an SFDL file into SCDL (it first runs the SFDLv2.0 compiler in the background). Say you want to compile the included example max.sfdl, then simply run .\/compile max.sfdl. Two files are produced: max.scdl and the associated vars file max.scdl.vars. Then you can use the SCDL evaluator to evaluate the circuits and the outputs are printed according to their types.<\/p>","raw":"\nSimple Circuit Description Language (SCDL) is a simple functional language to describe an arithmetic circuit over some ring. It has two basic operations, addition (+) and multiplication (\\*). The language can be interpreted in any ring. For example, when interpreted in the Boolean field, \"+\" corresponds to XOR and \"*\" corresponds to AND. I developed this language for the purpose of Fully Homomorphic Encryption (FHE). FHE allows computation to be performed on encrypted data but current schemes evaluate the computation as a circuit. Most such schemes evaluate circuits that consist of AND and XOR gates. So far an evaluator has been implemented for SCDL that can be found at [https:\/\/github.com\/ciphron\/scdl](https:\/\/github.com\/ciphron\/scdl). 
For documentation on the language and how to use the evaluator, see [https:\/\/github.com\/ciphron\/scdl\/blob\/master\/scdl.pdf](https:\/\/github.com\/ciphron\/scdl\/blob\/master\/scdl.pdf).\n\n\n### FairplayMP Simple Function Description Language (SFDL) v2.0 Compiler to SCDL\nThe FairplayMP project ([http:\/\/www.cs.huji.ac.il\/project\/Fairplay\/FairplayMP.html](http:\/\/www.cs.huji.ac.il\/project\/Fairplay\/FairplayMP.html)) incorporates a compiler for a language called Simple Function Description Language (SFDL) that translates the SFDL code into a Boolean circuit. SFDL version 2.0 (http:\/\/www.cs.huji.ac.il\/project\/Fairplay\/FairplayMP\/SFDL2.0.pdf) is a simple imperative language with some constraints such as bounded loops (i.e. supporting a maximum number of iterations given by a constant) and no recursion. The circuit that is generated by the SFDLv2.0 compiler is represented by a low-level language created by the FairplayMP developers. I've written a compiler that translates that low-level language into SCDL. A vars file (see the SCDL documentation) is also automatically produced that gives a high-level description of the inputs and outputs to a program in terms of their names and types. The compiler is available at [https:\/\/github.com\/ciphron\/sfdl_to_scdl](https:\/\/github.com\/ciphron\/sfdl_to_scdl). This compiler allows you to simply compile an SFDL file into SCDL (it first runs the SFDLv2.0 compiler in the background). Say you want to compile the included example max.sfdl, then simply run .\/compile max.sfdl. Two files are produced: max.scdl and the associated vars file max.scdl.vars. Then you can use the SCDL evaluator to evaluate the circuits and the outputs are printed according to their types.\n","timestamp":1487166730,"edited":1487166730}]